

QuantumScape Welcomes Tech Industry Veteran Geoff Ribar to Board of Directors

30 January 2026 at 19:31


SAN JOSE, Calif.–(BUSINESS WIRE)–QuantumScape Corporation (NASDAQ: QS), a global leader in next-generation solid-state lithium-metal battery technology, today announced the appointment of Geoff Ribar to its board of directors. Ribar brings deep expertise and extensive leadership experience in the technology industry, with decades serving as CFO for companies across the sector.

Ribar was Chief Financial Officer at Cadence Design Systems from 2010 to 2017 and previously served as CFO at Telegent Systems, Matrix Semiconductor and NVIDIA Corporation, among others. He was formerly Vice President and corporate controller at Advanced Micro Devices (AMD). He also serves on the board of directors at Acacia Research Corporation, Everspin Technologies and MACOM Technology Solutions.

Dennis Segers, chairman of the QS board of directors, said: “Geoff brings decades of experience in the technology industry, and he knows what it takes to position transformational technology companies for durable success. We’re thrilled to have him on the QS board of directors and look forward to working closely with him to serve our mission and our shareholders.”

Geoff Ribar said: “QS is working to bring a transformational technology to global scale. Energy storage is a critical enabler of future technology progress, and QS is one of the clear leaders revolutionizing the industry. I’m excited to join the board of directors at this pivotal point in the company’s history.”



The post QuantumScape Welcomes Tech Industry Veteran Geoff Ribar to Board of Directors appeared first on Batteries News.

Nearly one million customers without power as southeast utilities respond to Winter Storm Fern

26 January 2026 at 17:54

Winter Storm Fern has exited stage right, but not before wreaking havoc on power grids across the southeastern United States. As of Monday morning at 9 am ET, more than 800,000 customers were still without electricity after Fern pummeled a vast swath of the US with snow, sleet, and ice amidst subzero temperatures.

According to live outage tracker PowerOutage.us, Tennessee (250,459), Mississippi (161,059), and Louisiana (127,719) have the most outages, followed by Texas (66,665), Kentucky (47,624), and South Carolina (44,114). Tens of thousands of power outages persist in Georgia, North Carolina, Virginia, and West Virginia.

A Deadly, Icy Mess

More than a dozen deaths have already been blamed on the winter storm, and perilous conditions will persist through Monday, creating “dangerous travel and infrastructure impacts” for days, according to the National Weather Service.

For utilities, that means power poles and lines damaged or broken under the weight of ice. Predictions called for up to a staggering 1.5 inches of ice accumulation in some areas, including northern Mississippi and the western Carolinas. For reference, half an inch of ice (or less) is all it takes to down a power line and trigger widespread outages.

On Sunday, freezing rain slickened roads and brought trees and branches down, imperiling hundreds of miles of the southern US. In Corinth, Mississippi, heavy machinery manufacturer Caterpillar told employees at its remanufacturing site to stay home Monday and Tuesday. In Oxford, MS, police appealed to residents to stay home, and some utility crews were pulled from their jobs overnight.

“Due to life-threatening conditions, Oxford Utilities has made the difficult decision to pull our crews off the road for the night,” the utility company posted on Facebook early Sunday. “Trees are actively snapping and falling around our linemen while they are in the bucket trucks.”

Elsewhere, deep snow — over a foot (30 centimeters) in a 1,300-mile (2,100-kilometer) swath from Arkansas to New England — halted traffic and canceled flights.

President Donald Trump approved emergency declarations for at least a dozen states by Saturday. The Federal Emergency Management Agency had rescue teams and supplies in numerous states, Homeland Security Secretary Kristi Noem said.

Hardest-Hit Utilities and Their Response

Tennessee’s Nashville Electric Service (NES) and Entergy (primarily in Louisiana) remain the hardest-hit utilities Monday morning. More than 175,000 Nashville Electric subscribers are without power, representing nearly 38% of its customer base. More than 147,000 Entergy customers are still waiting for their lights to come back on, roughly 5% of the total the utility serves in Louisiana.

NES says teams of nearly 300 line workers have been deployed around the clock to make repairs and restore infrastructure. The utility says more than 76 broken poles have already been fixed. More than 70 distribution circuits are out and are being restored. Since Saturday, crews have been operating in continuous rotations and will remain on extended 14–16‑hour shifts.

Icy conditions have limited restoration progress in its territories, according to Entergy. Overnight, temperatures dropped below freezing, hampering travel and causing additional outages in some locations. As of Monday morning, the utility reported more than 88,000 outages in Louisiana and another 55,000 in Mississippi. As of Sunday evening, transmission damage assessments showed approximately 20 transmission lines covering some 470 miles, along with 20 substations, out of service across Entergy’s service area. Around 10 transmission lines and 30 substations have been returned to service. At least 400 poles, 90 transformers, and 1,460 spans of wire were damaged; more than 20 poles, 20 transformers, and 70 spans of wire have been restored so far.

Duke Energy, Southwestern Electric Power Company, and Appalachian Power Company each have just north of 30,000 customers still without electricity. Tri-County EMC, Blue Ridge Electric Cooperative, North East Mississippi EPA, and Cumberland EMC are still working to restore services for more than 20,000 customers.

Reporting from the Associated Press was used in this article.

Originally published in Factor This.

Transportation and logistics providers see 2026 as critical year for technology to transform business processes

29 January 2026 at 17:48



In his 40 years leading McLeod Software, one of the nation’s largest providers of transportation management systems for truckers and 3PLs (third-party logistics providers), Tom McLeod has seen many a new technology product introduced with much hype and promise, only to fade in real-world practice and fail to mature into a productive application.

In his view, as new tech players have come and gone, the basic demand from shippers and trucking operators has remained straightforward and essentially unchanged: “Find me a way to use computers and software to get more done in less time and [at a] lower cost,” he says.

“It’s been the same goal, from decades ago when we replaced typewriters, all the way to today finding ways to use artificial intelligence (AI) to automate more tasks, streamline processes, and make the human worker more efficient,” he adds. “Get more done in less time. Make people more productive.”

The difference between now and the pretenders of the past? McLeod and others believe that AI is the real thing and, as it continues to develop and mature, will be incorporated deeper into every transportation and logistics planning, execution, and supply chain process, fundamentally changing and forcing a reinvention of how shippers and logistics service providers operate and manage the supply chain function.

“But it is not a magic bullet you can easily switch on,” McLeod cautions. “While the capabilities look magical, at some level it takes time to train these models, get them using data properly, and then have them come back with recommendations or actions that can be relied upon.”

THE DATA CONUNDRUM

One of the challenges is that so much supply chain data today remains highly unstructured—by one estimate, as much as 75%. Converting and consolidating myriad sources and formats of data, and ensuring it is clean, complete, and accurate remains perhaps the biggest challenge to accelerated AI adoption.

Often today when a broker is searching for a truck, entering an order, quoting a load, or pulling a status update, someone is interpreting that text or email, extracting information from the transportation management system (TMS), and creating a response to the customer, explains Doug Schrier, McLeod’s vice president of growth and special projects. “With AI, what we can do is interpret what the email is asking for, extract that, overlay the TMS information, and use AI to respond to the customer in an automated fashion,” he says.

To come up with a price quote using traditional methods might take three or four minutes, he’s observed. An AI-enabled process cuts that down to five seconds. Similarly, entering an order into a system might take four to five minutes. With AI interpreting the email string and other inputs, a response is produced in a minute or less. “So if you are doing [that task] hundreds of times a week, it makes a difference. What you want to do is get the human adding the value and [use AI] to get the mundane out of the workflow.”

Yet the growth of AI is happening across a technology landscape that remains fragmented, with some solutions that fit part of the problem, and others that overlap or conflict. Today it’s still a market where there is not one single tech provider that can be all things to all users.

In McLeod’s view, its job is to focus on the mission of providing a highly functional primary TMS platform—and then complement and enhance that with partners who provide a specialized piece of an ever-growing solution puzzle. “We currently have built, over the past three decades, 150 deep partnerships, which equates to about 250 integrations,” says Ahmed Ebrahim, McLeod’s vice president of strategic alliances. “Customers want us to focus on our core competencies and work with best-of-breed parties to give them better choices [and a deeper solution set] as their needs evolve,” he adds.

One example of such a best-of-breed partnership is McLeod’s arrangement with Qued, an AI-powered application developer that provides McLeod TMS clients with connectivity and process automation for every load appointment scheduling mode, whether through a portal, email, voice, or text.

Before Qued was integrated, there were about 18 steps a user had to complete to get an appointment back into the TMS, notes Tom Curee, Qued’s president. With Qued, those steps are reduced to virtually zero and require no human intervention.

As soon as a stop is entered into the TMS, it is immediately and automatically routed to Qued, which reaches out to the scheduling platform or location, secures the appointment, and returns an update into the TMS with the details. It eliminates manual appointment-making tasks like logging on and entering data into a portal, and rekeying or emailing, and it significantly enhances the value and efficiency of this particular workflow activity for McLeod users.
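The hands-off flow described above can be sketched as an event-driven handoff: the TMS emits a stop, an external scheduler books the appointment, and the confirmation flows back with no human in the loop. This is a hypothetical illustration only; the function and field names below are invented for the sketch and do not reflect Qued’s or McLeod’s actual APIs.

```python
# Illustrative sketch of the TMS-to-scheduler handoff (hypothetical names,
# not Qued's or McLeod's real API).

def schedule_stop(stop, book_appointment):
    """Route a newly entered stop to a scheduler and return the TMS update."""
    request = {
        "stop_id": stop["stop_id"],
        "location": stop["location"],
        "requested_window": stop["requested_window"],
    }
    # The scheduler handles the portal, email, voice, or text legwork.
    confirmation = book_appointment(request)
    # Return the update the TMS would record, with no human intervention.
    return {
        "stop_id": stop["stop_id"],
        "appointment_time": confirmation["time"],
        "confirmation_number": confirmation["ref"],
    }

def fake_scheduler(request):
    # Stand-in for the external scheduling service.
    return {"time": request["requested_window"][0], "ref": "CONF-001"}

update = schedule_stop(
    {"stop_id": "S-42", "location": "Memphis DC",
     "requested_window": ("2026-02-02T08:00", "2026-02-02T12:00")},
    fake_scheduler,
)
print(update)
```

The point of the pattern is that the TMS only ever sees structured updates; everything between "stop entered" and "appointment confirmed" is delegated.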

LEGACY SYSTEM PAIN

One of the effects of the three-year freight recession has been its impact on investment. Whereas in better times, logistics and trucking firms would focus on buying tech to reduce costs, enhance productivity, and improve customer service, the constant financial pressure has narrowed that focus.

“First and exclusively, it is now on ‘How do we create efficiency by replacing people and really bring cost levels down because rates are still extremely low and margins really tight,’” says Bart De Muynck, a former Gartner research analyst covering the visibility and supply chain tech space, and now principal at consulting firm Bart De Muynck LLC.

Most industry operators he’s spoken with have looked at AI. One example he cites as ripe for transformation is freight brokerages, “where you have rows and rows of people on the phone.” They are asking the question “Which of these processes or activities can we do with AI?”

Yet De Muynck points to one issue that is proving to be a roadblock to change and transformation. “For many of these companies, their foundational technology is still on older architectural platforms,” in some cases proprietary ones, he notes. “It’s hard to combine AI with those.” And because of years of low margins and cash flow restrictions, “they have not been able to replace their core ERP [enterprise resource planning system] or the TMS for that carrier or broker, so they are still running on very old tech.”

For those players, De Muynck says they will discover a disconcerting reality: the difficulty of trying to apply AI on a platform that is decades old. “That will yield some efficiencies, but those will be short term and limited in terms of replacing manual tasks,” he says.

The larger question, De Muynck says, is “How do you reinvent your company to become more successful? How do we create applications and processes that are based on the new architecture so there is a big [transformative] lift and shift [and so we can implement and deploy foundational pieces fairly quickly]? Then with those solutions build something with AI that is truly transformational and effective.” And, he adds, bring the workforce along successfully in the process.

“People have some things in their jobs they have to do 100 times a day,” often a menial or boring task, De Muynck adds. “AI can automate or streamline those tasks in such a way that it improves the employee’s work experience and job satisfaction, while driving efficiencies. [Rather than eliminate a position], brokers can redirect worker time to higher-value, more complex tasks that need human input, intuition, and leadership.”

“With logistics, you cannot take people completely out of the equation,” he emphasizes. “[The best AI solutions] will be a human paired up with an intelligent AI agent. It will be a combination of people [and their tribal knowledge and institutional experience] and technology,” he predicts.

EYES OPEN

Shippers, truckers, and 3PLs are experiencing an awakening around the possibilities of technologies today and what modern architecture, in-the-cloud platforms, and AI-powered agents can do, says Ann Marie Jonkman, vice president–industry advisory for software firm Blue Yonder. For many, the hardest decision is where to start. It can be overwhelming, particularly in a market environment shaped by chaos, uncertainty, and disruption, where surviving every week sometimes seems a challenge in itself.

“First understand and be clear about what you want to achieve and the problems you want to solve” with a tech strategy, she advises. “Pick two or three issues and develop clear, defined use cases for each. Look at the biggest disruptions—where are the leakages occurring and how do I start?”

The investment area she sees most frequently targeted is automation in a broad sense: not just physical activity with robotics, but business processes, workflows, and operations. It is also about being able to understand tradeoffs, getting ahead of and removing waste, and moving the organization from a reactionary posture to one that is more proactive and informed and can leverage what Jonkman calls “decision velocity.” That places a priority not only on connecting the silos, but also on incorporating clean, accurate, and actionable data into one command center or control tower. When built and deployed correctly, such central platforms can provide near-immediate visibility into supply chain health as well as more efficient and accurate management of the end-to-end process.

Those investments in supply chain orchestration not only accelerate and improve decision-making around stock levels, fulfillment, shipping choices, and overall network and partner performance, but also provide the ability to “respond to disruption and get a handle on the data to monitor and predict disruption,” Jonkman adds. It’s tying together the nodes and flows of the supply chain so “fulfillment has the order ready at the right place and the right time [with the right service]” to reduce detention and ensure customer expectations are met.

It is important for companies not to sit on the sidelines, she advises. Get into the technology transformation game in some form. “Just start somewhere,” even if it is a small project, learn and adapt, and then go from there. “It does not need to be perfect. Perfection can be the enemy of success.”

The speed of technology innovation always has been rapid, and the advent of AI and automation is accelerating that even further, observes Jason Brenner, senior vice president of digital portfolio at FedEx. “We see that as an opportunity, rather than a challenge.”

He believes one of the industry’s biggest challenges is turning innovation into adoption, “ensuring new capabilities integrate smoothly into existing operations and deliver value quickly.” Brenner adds that in his view, “innovation is healthy and pushes everyone forward.”

Execution at scale is where the rubber meets the road. “Delivering technology that works reliably across millions of shipments, geographies, and constantly changing conditions requires deep operational integration, massive data sets, and the ability to test solutions in multiple environments,” he says. “That’s where FedEx is uniquely positioned.”

DEFYING AUTOMATION NO MORE

Before the arrival of the newest forms of AI, “there were shipping tasks that had defied automation for decades,” notes Mark Albrecht, vice president of artificial intelligence for freight broker and 3PL C.H. Robinson. “Humans had to do this repetitive, time-consuming—I might even say mind-numbing—yet essential work.”

Application of early forms of AI, such as machine learning tools and algorithms, provided a hint of what was to come. CHR, which has one of the largest in-house IT development groups in the industry, has been using those for a decade.

Large language models and generative AI were the next big leap. “It’s the advent of agentic AI that opens up new possibilities and holds the greatest potential for transformation in the coming year,” Albrecht says, adding, “Agentic AI doesn’t just analyze or generate content; it acts autonomously to achieve goals like a human would. It can apply reasoning and make decisions.”

CHR has built and deployed more than 30 AI agents, Albrecht says. Collectively, they have performed millions of once-manual tasks—and generated significant benefits. “Take email pricing requests. We get over 10,000 of those a day, and people used to open each one, read it, get a quote from our dynamic pricing engine, and send that back to the customer,” he notes. “Now a proprietary AI agent does that—in 32 seconds.”

Another example is load tenders. “It used to take our people upwards of four hours to get to those through a long queue of emails,” he recalls. That work is now done by an AI agent that reads the email subject line, body, and attachments; collects other needed information; and “turns it into an order in our system in 90 seconds,” Albrecht says. He adds that if the email is for 20 orders, “the agent can handle them simultaneously in the same 90 seconds,” whereas a human would have to handle them sequentially.

Time is money for the shipper at every step of the logistics process. So the faster a rate quote is provided, an order created, a carrier selected, and a load appointment scheduled, the greater the benefit to the shipper. “It’s all about speed to market, which, whether you’re a retailer or a manufacturer, often translates into whether you make the sale or keep an assembly line rolling.”

LOOKING AHEAD

Strip away all the hype, and the one tech deliverable that remains table stakes for all logistics providers and their customers is a platform that provides a timely and accurate view into where goods are and with whom, and when they will reach their destination. “First and foremost is real-time visibility that enables customer access to the movement of their product across the supply chain,” says Penske Executive Vice President Mike Medeiros. “Then, getting further upstream and allowing them to be more agile and responsive to disruptions.”

As for AI, “it’s not about replacing [workers]; it’s about pointing them in the right direction and helping [them] get more done in the same amount of time, with a higher level of service and enabling a more satisfying work experience. It’s human capital complemented by AI-powered agents as virtual assistants. We’ve already [started] down that path.”


This Valve Could Halve EV Fast-Charge Times

17 December 2025 at 19:15


Fast direct-current charging can take an EV’s battery from about 20 percent to 80 percent in 20 minutes. That’s not bad, but it’s still about six times as long as it takes to fill the tank of an ordinary petrol-powered vehicle.

One of the major bottlenecks to even faster charging is cooling, specifically uneven cooling inside big EV battery packs as the pack is charged. Hydrohertz, a British startup launched by former motorsport and power-electronics engineers, says it has a solution: a rotary coolant router, announced in November, that fires liquid coolant exactly where temperatures spike, within milliseconds, far faster than any single-loop system can react. In laboratory tests, this cooling tech allowed an EV battery to safely charge in less than half the time possible with a conventional cooling architecture.

A Smarter Way to Move Coolant

Hydrohertz calls its solution Dectravalve. It looks like a simple manifold, but it contains two concentric cylinders and a stepper motor to direct coolant to as many as four zones within the battery pack. It’s installed between the pack’s cold plates, which are designed to efficiently remove heat from the battery cells through physical contact, and the main coolant supply loop, replacing a tangle of valves, brackets, sensors, and hoses.

To keep costs low, Hydrohertz designed Dectravalve to be produced with off-the-shelf materials and seals, and with dimensional tolerances that can be met with the fabrication tools used by many major parts suppliers. Keeping things simple and comparatively cheap could improve Dectravalve’s chances of catching on with automakers and suppliers notorious for frugality. “Thermal management is trending toward simplicity and ultralow cost,” says Chao-Yang Wang, a mechanical and chemical engineering professor at Pennsylvania State University whose research includes the behavior of internal fluids in batteries and fuel cells. Automakers would prefer passive cooling, he notes, but not if it slows fast charging. So, at least for now, intelligent control is essential.

“If Dectravalve works as advertised, I’d expect to see a roughly 20 percent improvement in battery longevity, which is a lot.”–Anna Stefanopoulou, University of Michigan

Hydrohertz built Dectravalve to work with ordinary water-glycol, otherwise known as antifreeze, keeping integration simple. Using generic antifreeze avoids a step in the validation process where a supplier or EV manufacturer would otherwise have to establish whether some special formulation is compatible with the rest of the cooling system and doesn’t cause unforeseen complications. And because one Dectravalve can replace the multiple valves and plumbing assemblies of a conventional cooling system, it lowers the parts count, reduces leak points, and cuts warranty risk, Hydrohertz founder and CTO Martyn Talbot claims. The tighter thermal control also lets automakers shrink oversize pumps, hoses, and heat exchangers, improving both cost and vehicle packaging.

The valve reads battery-pack temperatures several times per second and shifts coolant flow instantly. If a high-load event—like a fast charge—is coming, it prepositions itself so more coolant is apportioned to known hot spots before the temperature rises in them.
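The behavior described here can be sketched as a simple routing policy: weight each zone by how hot it is, and bias flow toward known hot spots when a high-load event is coming. This is a hypothetical illustration, not Hydrohertz’s actual control law; the zone count, temperature floor, and prepositioning bias are all made-up assumptions.

```python
# Illustrative multizone coolant-routing policy (hypothetical, not
# Hydrohertz's real controller): sample zone temperatures, route flow
# toward the hottest zones, preposition before a fast charge.

def route_coolant(zone_temps, fast_charge_imminent=False,
                  known_hot_spots=(0,), floor_c=25.0):
    """Return the fraction of total coolant flow sent to each zone."""
    # Weight each zone by how far it sits above a temperature floor.
    weights = [max(t - floor_c, 0.0) for t in zone_temps]
    if fast_charge_imminent:
        # Preposition: bias flow toward zones known to heat up first.
        weights = [w + (5.0 if i in known_hot_spots else 0.0)
                   for i, w in enumerate(weights)]
    total = sum(weights)
    if total == 0.0:
        # Nothing above the floor: split flow evenly.
        return [1.0 / len(zone_temps)] * len(zone_temps)
    return [w / total for w in weights]

# Steady cruise: flow follows the temperature spread across four zones.
print(route_coolant([30.0, 40.0, 35.0, 30.0]))
```

Run once per sensor sample (the article says several times per second), the policy continuously reapportions a fixed total flow rather than adding cooling capacity, which is the article’s central point.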

Multizone control can also speed warm-up to prevent the battery degradation that comes from charging at frigid temperatures. “You can send warming fluid to heat half the pack fast so it can safely start taking load,” says Anna Stefanopoulou, a professor of mechanical engineering at the University of Michigan who specializes in control systems, energy, and transportation technologies. That half can begin accepting load, while the system begins warming the rest of the pack more gradually, she explains. But Dectravalve’s main function remains cooling fast-heating troublesome cells so they don’t slow charging.

Quick response to temperature changes inside the battery doesn’t increase the cooling capacity, but it leverages existing hardware far more efficiently. “Control the coolant with more precision and you get more performance for free,” says Talbot.

Charge Times Can Be Cut By 60 Percent

In early 2025, the Dectravalve underwent bench testing conducted by the Warwick Manufacturing Group (WMG), a multidisciplinary research center at the University of Warwick, in Coventry, England, that works with transport companies to improve the manufacturability of battery systems and other technologies. WMG compared Dectravalve’s cooling performance with that of a conventional single-loop cooling system using the same 100-kilowatt-hour battery pack. During fast-charge trials from 10 percent to 80 percent, Dectravalve held peak cell temperature below 44.5 °C and kept cell-to-cell temperature variation to just below 3 °C without intervention from the battery management system. Similar thermal performance for the single-loop system was made possible only by dialing back the amount of power the battery would accept—the very tapering that keeps fast charging from being on par with gasoline fill-ups.

Keeping the cell temperatures below 50 °C was key, because above that temperature lithium plating begins. The battery suffers irreversible damage when lithium starts coating the surface of the anode—the part of the battery where electrical charge is stored during charging—instead of filling its internal network of pores the way water does when it’s absorbed by a sponge. Plating greatly diminishes the battery’s charge-storage capacity. Letting the battery get too hot can also cause the electrolyte to break down. The result is inhibited flow of ions between the electrodes. And reduced flow within the battery means reduced flow in the external circuit, which powers the vehicle’s motors.

Because the Dectravalve kept temperatures low and uniform—and the battery management system didn’t need to play energy traffic cop and slow charging to a crawl to avoid overheating—charging time was cut by roughly 60 percent. With Dectravalve, the battery reached 80 percent state of charge in between 10 and 13 minutes, versus 30 minutes with the single-cooling-loop setup, according to Hydrohertz.
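The arithmetic behind the headline figure is easy to check: going from 30 minutes down to between 10 and 13 minutes is a reduction of 57 to 67 percent, consistent with “roughly 60 percent.”

```python
# Check the reported charge-time reduction: 30 minutes down to 10-13
# minutes, per Hydrohertz's figures.
baseline_min = 30.0
for new_min in (10.0, 13.0):
    reduction = 1.0 - new_min / baseline_min
    print(f"{new_min:.0f} min -> {reduction:.0%} reduction")
```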


When Batteries Keep Cool, They Live Longer

Using Warwick’s temperature data, Hydrohertz applied standard degradation models and found that cooler, more uniform packs last longer. Stefanopoulou estimates that if Dectravalve works as claimed, it could boost battery life by roughly 20 percent. “That’s a lot,” she says.

Still, it could be years before the system shows up on new EVs, if ever. Automakers will need years of cycle testing, crash trials, and cost studies before signing off on a new coolant architecture. Hydrohertz says several EV makers and battery suppliers have begun validation programs, and CTO Talbot expects licensing deals to ramp up as results come in. But even in a best-case scenario, Dectravalve won’t be keeping production-model EV batteries cool for at least three model years.

Electrifying Everything Will Require Multiphysics Modeling

16 October 2025 at 15:00


A prototyping problem is emerging in today’s efforts to electrify everything. What works as a lab-bench mockup breaks in reality. Harnessing and safely storing energy at grid scale and in cars, trucks, and planes is a very hard problem that simplified models sometimes can’t touch.

“In electrification, at its core, you have this combination of electromagnetic effects, heat transfer, and structural mechanics in a complicated interplay,” says Bjorn Sjodin, senior vice president of product management at the Stockholm-based software company COMSOL.

COMSOL is an engineering R&D software company that seeks to simulate not just a single phenomenon—for instance, the electromagnetic behavior of a circuit—but rather all the pertinent physics that needs to be simulated for developing new technologies in real-world operating conditions.

Engineers and developers gathered in Burlington, Mass., from 8 to 10 October for COMSOL’s annual Boston conference, where they discussed simulations that couple multiple physics packages simultaneously. Multiphysics modeling, as the field is called, has become a component of electrification R&D that is more than just nice to have.

“Sometimes, I think some people still see simulation as a fancy R&D thing,” says Niloofar Kamyab, a chemical engineer and applications manager at COMSOL. “Because they see it as a replacement for experiments. But no, experiments still need to be done, though experiments can be done in a more optimized and effective way.”

Can Multiphysics Scale Electrification?

Multiphysics, Kamyab says, can sometimes be only half the game.

“I think when it comes to batteries, there is another attraction when it comes to simulation,” she says. “It’s multiscale—how batteries can be studied across different scales. You can get in-depth analysis that, if not very hard, I would say is impossible to do experimentally.”

In part, this is because batteries reveal complicated behaviors (and runaway reactions) at the cell level and, in unpredictable new ways, at the battery-pack level.

“Most of the people who do simulations of battery packs—thermal management is one of their primary concerns,” Kamyab says. “You do this simulation so you know how to avoid it. You recreate a cell that is malfunctioning.” She adds that multiphysics simulation of thermal runaway enables battery engineers to safely test how each design behaves in even the most extreme conditions—in order to stop any battery problems or fires before they could happen.

Wireless charging systems are another area of electrification, with their own thermal challenges. “At higher power levels, localized heating of the coil changes its conductivity,” says Nirmal Paudel, a lead engineer at Veryst Engineering, a consulting firm based in Needham, Mass. And that, he notes, in turn can change the entire circuit as well as the design and performance of all the elements that surround it.
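The feedback Paudel describes can be illustrated with a toy calculation (far simpler than anything COMSOL solves): copper’s resistivity rises roughly linearly with temperature, so Joule heating raises the coil temperature, which raises resistance, which changes the heating again. A fixed-point iteration finds the self-consistent operating point; the coil geometry and cooling coefficient below are made-up assumptions.

```python
# Toy coupled electro-thermal calculation (not COMSOL): copper resistivity
# follows rho = rho0 * (1 + alpha * (T - T0)), so heating and resistance
# feed back on each other until they balance.

RHO0 = 1.68e-8   # copper resistivity at 20 C, ohm*m
ALPHA = 3.9e-3   # temperature coefficient of resistivity, 1/K
K_COOL = 0.5     # assumed cooling, W per kelvin above ambient

def coil_temperature(current_a, length_m=10.0, area_m2=1e-6, t_amb=20.0):
    """Iterate until Joule heating balances cooling at a consistent T."""
    t = t_amb
    for _ in range(200):
        resistance = RHO0 * (1 + ALPHA * (t - t_amb)) * length_m / area_m2
        power = current_a ** 2 * resistance   # Joule heating, W
        t_new = t_amb + power / K_COOL        # steady-state heat balance
        if abs(t_new - t) < 1e-6:
            break
        t = t_new
    return t

# The coupled answer runs well above the constant-resistivity estimate,
# which is exactly the effect a single-physics model misses.
print(coil_temperature(20.0))
```

With these numbers the uncoupled estimate (constant resistivity) gives about 154 °C, while the self-consistent coupled answer is roughly 302 °C, showing why ignoring the interplay badly underestimates the heat.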

Electric motors and power converters require similar simulation savvy. According to electrical engineer and COMSOL senior application engineer Vignesh Gurusamy, older ways of developing these age-old electrical workhorse technologies are proving less useful today. “The recent surge in electrification across diverse applications demands a more holistic approach as it enables the development of new optimal designs,” Gurusamy says.

And freight transportation: “For trucks, people are investigating, Should we use batteries? Should we use fuel cells?” Sjodin says. “Fuel cells are very multiphysics friendly—fluid flow, heat transfer, chemical reactions, and electrochemical reactions.”

Lastly, there’s the electric grid itself. “The grid is designed for a continuous supply of power,” Sjodin says. “So when you have power sources [like wind and solar] shutting off and on all the time, you have completely new problems.”

Multiphysics in Battery and Electric-Motor Design

Taking such an all-in approach to engineering simulations can yield unanticipated upsides as well, says Kamyab. Berlin-based automotive engineering company IAV, for example, is developing power-train systems that integrate multiple battery formats and chemistries in a single pack. “Sodium ion cannot give you the energy that lithium ion can give,” Kamyab says. “So they came up with a blend of chemistries, to get the benefits of each, and then designed a thermal management that matches all the chemistries.”

Jakob Hilgert, who works as a technical consultant at IAV, recently contributed to a COMSOL industry case study. In it, Hilgert described the design of a dual-chemistry battery pack that combines sodium-ion cells with a more costly lithium solid-state battery.

Hilgert says that using multiphysics simulation enabled the IAV team to play the two chemistries’ different properties off of each other. “If we have some cells that can operate at high temperatures and some cells that can operate at low temperatures, it is beneficial to take the exhaust heat of the higher-running cells to heat up the lower-running cells, and vice versa,” Hilgert said. “That’s why we came up with a cooling system that shifts the energy from cells that want to be in a cooler state to cells that want to be in a hotter state.”

According to Sjodin, IAV is part of a larger trend in a range of industries that are impacted by the electrification of everything. “Algorithmic improvements and hardware improvements multiply together,” he says. “That’s the future of multiphysics simulation. It will allow you to simulate larger and larger, more realistic systems.”

According to COMSOL’s Gurusamy, GPU accelerators and surrogate models allow for bigger jumps in electric-motor capabilities and efficiencies. Even seemingly simple components like the copper windings in a motor’s stationary core (the stator) provide parameters that multiphysics can optimize.

“A primary frontier in electric-motor development is pushing power density and efficiency to new heights, with thermal management emerging as a key challenge,” Gurusamy says. “Multiphysics models that couple electromagnetic and thermal simulations…incorporate temperature-dependent behavior in stator windings and magnetic materials.”

Simulation is also changing the wireless charging world, Paudel says. “Traditional design cycles tweak coil geometry,” he says. “Today, integrated multiphysics platforms enable exploration of new charging architectures,” including flexible charging textiles and smart surfaces that adapt in real time.

And batteries, according to Kamyab, are continuing their push toward higher power densities and lower prices. That is changing not just the industries where batteries are already used, like consumer electronics and EVs; higher-capacity batteries are also driving new industries like electric vertical take-off and landing aircraft (eVTOLs).

“The reason that many ideas that we had 30 years ago are becoming a reality is now we have the batteries to power them,” Kamyab says. “That was the bottleneck for many years.... And as we continue to push battery technology forward, who knows what new technologies and applications we’re making possible next.”

Issues Data Centers Face and How to Overcome Them: A Guide for Managers

20 January 2026 at 14:30

Data centers are the backbone of modern digital infrastructure. They power cloud services, financial systems, healthcare platforms, and nearly every technology-driven business. As demand for data storage and processing grows, so do the operational, financial, and risk-related challenges data centers face. For managers, understanding these issues and knowing how to address them proactively is critical to maintaining uptime, security, and long-term viability. Here is what you need to know to ensure you can meet demand.

Rising Energy Costs and Efficiency Demands

One of the most persistent challenges for data centers is energy consumption. Powering servers, cooling systems, and redundancy infrastructure requires enormous amounts of electricity, and energy costs continue to rise globally. Beyond cost, there is increasing pressure from regulators and customers to reduce environmental impact.

Managers can address this by investing in energy-efficient hardware, optimizing airflow and cooling layouts, and adopting real-time monitoring tools to identify waste. Long-term strategies may include transitioning to renewable energy sources or partnering with utility providers for more favorable pricing structures.

Cooling and Thermal Management Challenges

Heat is an unavoidable byproduct of high-density computing. Inefficient cooling not only increases costs but also raises the risk of equipment failure and downtime. As server densities increase, traditional cooling methods often struggle to keep up.

Modern solutions include hot-aisle/cold-aisle containment, liquid cooling, and AI-driven thermal monitoring systems. For managers, the key is treating cooling as a dynamic system rather than fixed infrastructure, one that evolves alongside hardware demands.

Financial Risk and Insurance Considerations

Data centers face significant financial exposure from equipment damage, downtime, liability claims, and unforeseen events. Even with strong operational controls, risk cannot be eliminated entirely.

This is where insurance becomes a critical part of risk management. Evaluating coverage that aligns with the unique needs of data center operations can help protect against losses that would otherwise threaten business continuity. BOP insurance by Next Insurance can help managers think more holistically about protecting assets, operations, and revenue streams as part of an overall risk strategy.

Downtime and Business Continuity Risks

Even brief outages can result in significant financial losses, reputational damage, and contractual penalties. This downtime may be caused by:

  • Power failures
  • Human error
  • Equipment malfunction
  • External events

To mitigate this risk, managers should prioritize redundancy at every critical point, including power supplies, network connections, and backup systems. Regular testing of disaster recovery plans is just as important as documenting them: a plan that hasn’t been tested is often unreliable in real-world conditions, putting your business and the customers you serve at risk.

Cybersecurity and Physical Security Threats

Data centers face a dual security challenge: digital threats and physical risks. Cyberattacks continue to grow in sophistication, while physical threats such as unauthorized access, theft, or vandalism remain real concerns.

Addressing this requires layered security. On the digital side, this includes continuous patching, network segmentation, and monitoring for unusual activity. Physically, access controls, surveillance systems, and strict visitor protocols are essential. Managers should also ensure that staff training keeps pace with evolving threats, as human error remains a major vulnerability.

Compliance and Regulatory Pressure

Data centers often operate under complex regulatory requirements related to data privacy, industry standards, and regional laws. Compliance failures can result in fines, legal exposure, and loss of customer trust.

Managers can stay ahead by maintaining clear documentation, conducting regular audits, and working closely with legal and compliance teams. Building compliance into operational processes, rather than treating it as an afterthought, reduces risk and simplifies reporting.

Turning Challenges Into Strategic Advantage

While data centers face a wide range of operational and strategic challenges, these issues also present opportunities for improvement and differentiation. Managers who address them proactively are better positioned to deliver reliability and value in a competitive market. Don’t let your data center be the one that falls victim to these issues; take action now.

# # #

About the Author:

James Daniels is a freelance writer, business enthusiast, a bit of a tech buff, and an overall geek. He is also an avid reader who can spend hours learning about the latest gadgets and tech, while offering views and opinions on these topics.

The post Issues Data Centers Face and How to Overcome Them: A Guide for Managers appeared first on Data Center POST.

ESI Expands HVO Fuel Services to Power Data Center Sustainability and Net‑Zero 2030 Ambitions

15 January 2026 at 14:00

ESI Total Fuel Management is expanding its Hydrotreated Vegetable Oil (HVO/R99) services to help data centers and other mission-critical facilities advance their sustainability strategies without sacrificing reliability. With this move, the company is deepening its role as a long-term partner for operators pursuing Net-Zero 2030 goals in an increasingly demanding digital infrastructure landscape.

Advancing data center sustainability

Across the data center industry, operators are under growing pressure to reduce the environmental impact of standby power systems while maintaining assured uptime. ESI draws on decades of experience in fuel lifecycle management, having previously championed ultra-low sulfur diesel adoption, to guide customers through the transition to renewable diesel.

To support practical and scalable adoption, ESI has established the first secure HVO/R99 supply chain on the East Coast, giving operators dependable access to renewable diesel as part of a long-term fuel strategy. This infrastructure enables data center and mission-critical operators to integrate HVO into their operations as a realistic step toward emissions reduction and operational continuity.

Renewable diesel performance benefits

HVO/R99 can reduce carbon emissions by up to 90 percent compared with conventional diesel, while maintaining strong cold-weather performance and long-term fuel stability suited to standby generator storage cycles. As a drop-in fuel, it requires no modifications to existing infrastructure and directly supports Scope 1 emissions reduction initiatives.

Integrated lifecycle approach

Within ESI’s broader portfolio, HVO is one component of a comprehensive approach encompassing fuel quality, monitoring, compliance, and system resiliency.

“Sustainability goals do not replace the need for resiliency, and they can be complementary,” said Alex Marcus, CEO and president of ESI Total Fuel Management. “Our focus is helping customers implement solutions that are technically sound and operationally proven. By managing the entire fuel lifecycle, from supply and storage to monitoring, consumption, and pollution control, we help customers reduce environmental impact while maintaining resilient, mission-critical systems.”

Supporting Net-Zero 2030 objectives

For data center operators pursuing Net-Zero 2030, ESI provides the engineering expertise, infrastructure, and operational support needed to move beyond isolated initiatives toward coordinated, data-driven fuel strategies. This combination of renewable fuel options and full lifecycle management helps strengthen both sustainability and resiliency for mission-critical environments.

Read the full release here.

The post ESI Expands HVO Fuel Services to Power Data Center Sustainability and Net‑Zero 2030 Ambitions appeared first on Data Center POST.

Transforming Infrastructure with Business and Technology

15 December 2025 at 16:00

The infra/STRUCTURE Summit 2025, held recently at The Wynn in Las Vegas on October 15-16, brought together industry leaders from Innovorg and Syntax to discuss the intersection of business strategy and technological innovation.

This infra/STRUCTURE Summit 2025 session spotlighted pivotal discussions on how organizations are shifting their models, developing AI capabilities, and managing talent for sustainable growth. Understanding these insights is crucial for infrastructure professionals aiming to stay ahead in a rapidly evolving landscape.

Key Voices and Perspectives

The session was led by Elya McCleave, CEO of Innovorg, a strategic leader passionate about integrating business and technology, whose company is at the forefront of digital transformation. Christian Primeau, CEO of Syntax, shared insights on leadership and the importance of curiosity and continuous education, drawing from his experience with product engagement and strategic positioning. Together, they emphasized the significance of a clear strategy, identity, and skill management in driving growth.

Throughout the discussion, McCleave underscored the importance of leveraging mergers and acquisitions for talent addition, citing Innovorg’s efforts in developing over 700 AI agents through employee training programs.

“This initiative illustrates the ongoing shift from a data center-centric model to an ecosystem-focused approach,” said McCleave, “which has drastically reduced data center revenue from 90% to less than 1%.”

McCleave also highlighted the importance of strategic market exploration, including talent sourcing in Korea and Argentina, and of fostering employee mobility through programs like “global tourism.”

Major Takeaways and Their Relevance

Speaking extensively about the connection between business and technology integration, McCleave emphasized that success depends on a clear strategy, strong identity, and skill management.

“Leaders are focusing on defining clear identities, strategies, and service models that create sustainable value,” said McCleave. “The commitment to upskilling over 3,000 employees in AI and automation reflects a proactive approach to future-proofing the workforce. Whereas, the bottom-up model of AI agent development demonstrates empowering employees to innovate directly.”

The shift away from traditional data centers toward a broader ecosystem model signifies a fundamental change in infrastructure operations. This evolution enhances agility and trust management across multi-cloud environments.

“Exploring global markets and flexible work arrangements indicate a strategic move to attract and retain top talent,” said McCleave. “Programs allowing employees to work remotely from diverse locations reinforce this commitment.”

Primeau contributed perspectives on leadership and engagement, emphasizing curiosity and lifelong learning as key elements of effective leadership.

“I encourage integrating personal experiences and storytelling into leadership development to strengthen connection and authenticity,” said Primeau. “Managing an extensive portfolio of applications across multiple clouds requires sophistication, trust, and strategic orchestration,” he said. “This is an essential focus area for modern infrastructure teams.”

Final Thoughts and Call to Action

These insights from the infra/STRUCTURE Summit 2025 demonstrate that innovation, strategic talent management, and technological agility are pivotal for infrastructure success. As organizations look to the future, embracing these trends will not only drive growth but also ensure resilience in an increasingly complex digital landscape.

Infra/STRUCTURE 2026: Save the date

Join industry leaders and pioneers to explore new horizons in infrastructure innovation. To tune in live, receive all presentations, and gain access to C-level executives, investors, and industry-leading research, save the date for infra/STRUCTURE 2026. It will be held October 7-8, 2026, at The Wynn Las Vegas. Pre-registration for next year’s event is now open; visit www.infrastructuresummit.io to learn more.

The post Transforming Infrastructure with Business and Technology appeared first on Data Center POST.

Ensuring Equipment Safety and Reliability in Data Centers

13 November 2025 at 15:00

What keeps data center operators up at night? Among other things, worries about the safety and reliability of their assets. Staying competitive, maintaining 24/7 uptime, and meeting customer demand can all seem like overwhelming tasks – especially while operating on a lean budget.

The good news is that safety and reliability are very compatible goals, especially in the data center. An efficient, proactive maintenance strategy will deliver both greater reliability and increased security, so that your data center can support ever-growing demand while maintaining the trust of its customers.

In this article, I’ll talk about the best practices for maintenance teams tasked with increasing safety and uptime. I’ll explain how choosing the right tools can help your data center thrive and scale, without increasing costs.

Baking In Safety and Efficiency 

Solid maintenance practices start at the commissioning stage.

There’s no getting around the fact that a data center build is labor-intensive and demanding. Every single connection, electrical point, and fiber optic cable needs to be tested and verified. If you’re not careful, the commissioning stage has enormous potential for error and wasted resources, especially in a hyperscale location. Here’s how to solve that problem.

Choose Your Tools Wisely

It’s important to use the right tools and build efficiencies into the commissioning stage. Think of this stage as an opportunity to design a process that makes sense for your crew and your resources.

If you’re working with a lean maintenance crew, make sure to use tools that are purpose-built for ease of use, so that everyone on your team can achieve high-quality results right away. Look for cable testers, Optical Time Domain Reflectometers, and Optical Loss Test Sets that are designed with intuitive interfaces and settings.

Select tools that comply with, or exceed, industry standards for accuracy. Precision results will make a huge difference when it comes to the long-term lifespan of your assets. Getting accurate readings the first time also eliminates the need for re-work.

Opt for Safety and Efficiency

As always, safety and efficiency go hand in hand. When you’re building a large or hyperscale data center, small gains in efficiency add up quickly. If your tools allow you to test each connection point just a few seconds more quickly, you’ll see significant savings by the end of the data center construction.

Once the commissioning stage is complete, it’s a question of consolidating your efficiency gains, and finding new ways to keep your data center resilient without raising costs. Let’s see what that looks like.

Using Non-Contact Tools for Safety and Efficiency

Once your data center is fully built, I recommend implementing non-contact tools as far as possible. Done right, this will drastically improve your uptime and performance, while reducing overall costs.

What does non-contact look like? For some equipment, like the pumps and motors that support your cooling equipment, wireless sensors can monitor asset health in real time, tracking vibration levels and temperature.
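As a toy illustration of what monitoring software does with those sensor streams (the logic below is a generic rolling-baseline check, not any vendor's actual algorithm), a reading can be flagged when it jumps well above its recent history:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=20, k=3.0):
    """Flag indices whose reading exceeds the rolling-baseline mean by
    more than k standard deviations. A crude stand-in for the trending
    logic inside commercial condition-monitoring platforms."""
    alerts = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]      # the last `window` readings
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and readings[i] > mu + k * sigma:
            alerts.append(i)
    return alerts

# Vibration velocity in mm/s: steady around 2 mm/s, then a simulated
# bearing fault drives the level sharply upward.
healthy = [2.0 + 0.02 * ((i * 7) % 5) for i in range(40)]
faulty = healthy + [2.6, 3.1, 3.5]
```

Run on the healthy stream alone, nothing is flagged; on the faulty stream, the first spike (index 40) trips the threshold, which is the moment a real system would open a work order.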

Using Digital and AI Tools

Tools like a CMMS, or an AI-powered diagnostic engine, sift through asset health data to pinpoint early indications of an emerging fault. Today’s AI tools are trained on billions of data points and can recognize faults in assets and component parts. They can even determine the fault severity level and issue detailed reports on the health of every critical asset in the facility.

Once the fault is identified, CMMS creates a work order and a technician examines the asset, making repairs as needed. For lean maintenance crews, digital tools free up valuable time and labor, so that experienced technicians can focus on carrying out repairs, instead of reading machine tests or generating work orders.

The bottom line: real-time wireless monitoring keeps your technicians safe, eliminating the need for route-based testing with a handheld device. No more sending workers to squeeze into tight spaces or behind machinery just to get a measurement. By extension, no more risk of human error or inaccurate readings. Digital tools don’t make careless mistakes, no matter how often they perform the same task.

Of course, wireless monitoring isn’t the only non-contact approach out there.

Bringing in the bots

It’s now increasingly common to send robots into the data center to perform basic tests. This accomplishes the crucial function of keeping people out of the data center, where they could potentially hurt themselves or damage something.

I often see robots used to perform thermal imaging tests. Thermal imaging is a key element in many maintenance processes, especially in the data center. It’s the best means of pinpointing electrical faults, wiring issues, faulty connections, and other early indicators of major issues.

Using a robot to conduct the testing (or a mounted, non-contact thermal imager) allows you to monitor frequently, for accurate and precise results. This also protects your team from potential dangers like arc flashes and electrical shocks.

Opening the (infrared) window

Infrared windows, installed directly into power cabinets, make power quality monitoring both safer and more efficient. This is by far the safest approach for operators and technicians. It also guarantees readings will be taken regularly and speeds up the measurement process, by eliminating the time-consuming permitting step. The more frequently your team takes readings, the more effectively they can identify emerging issues and get ahead of the serious faults that could impact your assets and your whole facility.

Successful scaling through automation

Standardizing and automating workflows can enable fast, effective scaling. These processes also extend the reach of lean maintenance teams, so that managers can oversee larger facilities while still delivering high performance.

Automated monitoring and testing – with wireless tools, robots, and non-contact technology – deliver data in near real time. When you pair this with AI, or with data analytics software, you’ll be able to identify emerging asset faults long before they become serious enough to cause downtime. This predictive technology enables far greater uptime and productivity, while also extending the lifespan of your assets.

Automated AI diagnostic tools, condition monitoring, and robotic testing all enable data centers to scale and to continue to deliver the speed and performance that today’s digitalized economy relies on.

# # #

About the Author

Mike Slevin is a General Manager (Networks, Routine Maintenance & Process Instrument) at Fluke, a company known worldwide for its electronic test and measurement tools. Mike works with data centers and industrial clients to improve energy efficiency, safety, and reliability through better monitoring and maintenance practices.

The post Ensuring Equipment Safety and Reliability in Data Centers appeared first on Data Center POST.

Data Center Outsourcing Market to Surpass USD 243.3 Billion by 2034

12 November 2025 at 15:00

The global data center outsourcing market was valued at USD 132.3 billion in 2024 and is estimated to grow at a CAGR of 6.4% to reach USD 243.3 billion by 2034, according to a recent report by Global Market Insights Inc.
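As a quick sanity check on those headline figures (the function names below are just for illustration), compound growth connects the two endpoints:

```python
def project(value, rate, years):
    """Project a value forward at a constant compound annual growth rate."""
    return value * (1 + rate) ** years

def implied_cagr(start, end, years):
    """Back out the CAGR implied by a start value, end value, and horizon."""
    return (end / start) ** (1 / years) - 1

# USD 132.3B (2024) compounded at 6.4% for 10 years lands near USD 246B,
# slightly above the report's USD 243.3B, so the implied CAGR is a touch
# under the quoted 6.4%; rounding in the report likely explains the gap.
forecast = project(132.3, 0.064, 10)
cagr = implied_cagr(132.3, 243.3, 10)
```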

The demand for data center outsourcing continues to rise as businesses increasingly pursue flexible and secure infrastructure solutions. Organizations are embracing hybrid cloud strategies that combine the control of private clouds with the agility of public cloud services. This approach enables companies to scale operations while maintaining tighter oversight of critical data. Outsourcing providers are now offering integrated solutions that span both private and public environments, optimizing performance and cost management simultaneously.

As emerging technologies such as 5G, IoT, and real-time applications gain momentum, enterprises are turning to edge computing for faster processing at the source of data generation. This has led to a shift toward more distributed outsourcing models, where smaller, decentralized facilities are placed closer to end users. In the US, hyperscale operators, including Microsoft Azure, Google Cloud, AWS, and IBM, are leading the outsourcing movement with their massive infrastructure and capacity to support enterprises at scale without hefty upfront investments. Meanwhile, data privacy frameworks such as HIPAA, FINRA, and CCPA are shaping outsourcing demand, driving businesses to work with providers who offer certified facilities, robust compliance support, and regional regulatory alignment.

The hardware segment of the data center outsourcing market captured a 43.7% share in 2024 and is projected to grow at a CAGR of 6.4% through 2034. With rising data volumes and evolving technologies, organizations are opting to outsource hardware management to cut capital expenses and adopt an operating-cost model. Managing in-house infrastructure upgrades is cost-intensive and time-consuming, which is why outsourcing hardware services has become a preferred path to scalability and agility.

The power and cooling infrastructure segment is expected to register a CAGR of 8.7% from 2025 to 2034. Outsourcing providers are introducing advanced energy management solutions to support high-performance computing environments. Technologies such as AI-based temperature control, liquid cooling, and free cooling are being adopted to handle heat generated by dense workloads while also reducing energy consumption and enhancing system reliability.

The United States data center outsourcing market held a 76.1% share in 2024, generating USD 34.8 billion. The US remains a global hub for data centers, driven by the presence of major providers such as Equinix, Amazon Web Services (AWS), Verizon Communications, and Google Cloud. As regulatory frameworks become more complex, companies increasingly seek third-party partners with the compliance credentials and infrastructure to navigate evolving data privacy laws. Canada’s enterprise market is also transitioning toward cloud-driven outsourcing models, prioritizing speed, innovation, and cross-platform orchestration. Providers with strong hybrid and multi-cloud capabilities are seeing increased traction across the region.

Key companies operating in the global data center outsourcing market include Cognizant, Tata Consultancy Services (TCS), Fujitsu, Accenture, Amazon Web Services (AWS), Google Cloud, Microsoft Azure, Equinix, Verizon Communications, and Digital Realty. To strengthen their position in the competitive data center outsourcing space, companies are focusing on expanding global infrastructure, integrating edge computing solutions, and offering hybrid and multi-cloud management platforms. Strategic investments are being made in AI-based automation for data management, energy optimization, and real-time monitoring. Providers are also forming alliances with hyperscale cloud vendors to co-deliver scalable services while ensuring compliance with evolving regional regulations. Emphasis is being placed on offering flexible service models, cost-effective infrastructure-as-a-service (IaaS), and dedicated support for industry-specific compliance, like healthcare or finance.

The post Data Center Outsourcing Market to Surpass USD 243.3 Billion by 2034 appeared first on Data Center POST.

Vector acquires YardView to improve dock visibility

22 January 2026 at 22:25

Workflow platform provider Vector today said it has acquired YardView, a vendor of yard management and dock visibility software, saying the move will expand its capabilities across gate, yard, dock, and document workflows for logistics networks.

By unifying its digital workflows with YardView's real-time asset tracking, Vector said it will close the "visibility gap" that opens during transfers of custody. The acquisition addresses growing demand for unified, real-time yard execution as logistics operators face labor constraints, rising detention costs, and increasing pressure to digitize handoffs across the supply chain, the San Francisco-based company said.

"By combining Vector's e-BOL powered workflows with YardView's dock and visibility solutions, we're creating a comprehensive offering that addresses the full spectrum of yard operations," said Will Chu, Vector co-founder and CEO. "This acquisition allows us to serve our customers more completely, from small operations to complex enterprise environments."

Terms of the deal were not disclosed, but the firms said that YardView President Nathan Harris, who has led the company for more than 10 years, will retain equity ownership and continue to advise the business. And YardView Chief Operating Officer Heather Giordano will lead the YardView business unit inside Vector.
