Sion Power is expanding its Licerion® lithium-metal battery program to supply cells and battery systems for US defense and aerospace. The cells are engineered to exceed 500 Wh/kg, up to 200 Wh/kg more than current advanced lithium-ion technology, even with silicon anode enhancements.
The platform covers both primary (single-discharge) and secondary (rechargeable) configurations. Target applications include long-endurance UAS, tactical and counter-UAS drones, missile and loitering munition platforms, autonomous maritime and ground vehicles and space systems. Sion Power operates a 110,000 sq ft cell manufacturing facility in Tucson, Arizona, and says it can demonstrate cells and integrated battery systems today, and expects initial product shipments in late 2026.
Lithium-metal anodes store substantially more energy per kilogram than graphite because lithium metal is lighter and more electrochemically active. For weight-constrained platforms, closing the gap from 300-350 Wh/kg for advanced Li-ion to 500+ Wh/kg translates directly into longer endurance and expanded payload capacity. Sion Power’s expansion also responds to US policy momentum—NDAA provisions support domestic battery supply chains and highlight demand for American-manufactured advanced cells.
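The endurance claim can be sanity-checked with a back-of-envelope calculation: for a fixed battery mass, endurance scales roughly in proportion to pack specific energy. The figures below are illustrative assumptions drawn from the ranges in this article, not Sion Power test data.

```python
# Back-of-envelope: at constant battery mass, endurance scales roughly
# with specific energy (illustrative figures only, not Sion Power data).
def endurance_gain(new_wh_per_kg: float, old_wh_per_kg: float) -> float:
    """Relative endurance improvement at constant battery mass."""
    return new_wh_per_kg / old_wh_per_kg

# Advanced Li-ion midpoint (~325 Wh/kg) vs. a 500 Wh/kg lithium-metal cell:
gain = endurance_gain(500, 325)
print(f"~{gain:.0%} of baseline endurance")
```

At the midpoint of the 300-350 Wh/kg range, that works out to roughly a 1.5x improvement, consistent with the "longer endurance" framing above; real gains depend on pack overhead, depth of discharge, and platform aerodynamics.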
“Our lithium-metal technology provides the step-change in energy density required to support longer-range missions, increased flight duration and higher payload capability while maintaining a U.S.-based manufacturing capability aligned with national security priorities,” said Pamela Fletcher, CEO of Sion Power.
“By combining high-energy lithium-metal chemistry with advanced battery pack engineering, Sion Power enables defense integrators to unlock two to three times increases in mission endurance, significantly extended operational range and dramatically higher payload capacity compared with conventional lithium-ion and lithium-polymer batteries used in today’s unmanned systems,” said Tracy Kelley, chief science officer at Sion Power.
Vishay Intertechnology has launched the VODA1275, an automotive-grade photovoltaic MOSFET driver offering 8 mm of creepage distance and a CTI 600 mold compound in a compact SMD-4 package. The device targets high voltage automotive applications including pre-charge circuits, wall chargers, and battery management systems for EVs and HEVs.
The VODA1275 delivers 20 V open circuit voltage, 20 μA short circuit current, and 80 μs turn-on time—three times faster than competing devices, according to Vishay. The driver provides reinforced isolation with a working isolation voltage of 1260 Vpeak and isolation test voltage of 5300 VRMS, making it suitable for 800 V+ battery systems. The device is AEC-Q102 qualified and meets automotive reliability standards.
The high open circuit voltage allows designers to use a single MOSFET driver instead of two drivers in series, which was previously required for higher voltage applications. This simplifies circuit design and reduces component count in systems that need to drive MOSFETs and IGBTs reliably at high voltages. The driver can also enable custom solid-state relays to replace electromechanical relays in next-generation vehicles.
The optically isolated device draws power from an infrared emitter on the low voltage side, eliminating the need for an external power supply on the isolated side. “The VODA1275 features the industry’s fastest turn-on times and the highest open circuit voltage and short circuit current in its class,” the company stated. The driver is RoHS-compliant and halogen-free. Samples and production quantities are available now with eight-week lead times, priced at $1.20 per piece for US delivery.
Magna, one of the world’s largest automotive suppliers, has introduced DHD REX, a single-motor dedicated hybrid drive for range extended electric vehicles (REEVs). The ready-to-integrate system is built on a modular architecture designed for OEMs operating across markets with different regulatory requirements, infrastructure conditions and customer expectations.
DHD REX runs in three modes: pure electric driving, a generating mode in which the ICE charges the battery for range extension, and an optional parallel hybrid mode for highway performance. The single-motor design reduces cost and packaging complexity compared to dual-motor configurations. Magna says the system is validated across B through E vehicle segments in AWD layouts including SUVs, and integrates into both ICE-based platforms and BEV-derived architectures.
In a range extended EV, the combustion engine runs as a generator in most conditions rather than driving the wheels—the electric motor handles propulsion. DHD REX’s optional parallel mode adds the ability for the ICE to contribute mechanical drive at highway speeds, where the efficiency penalty of the generator-motor conversion path is most pronounced.
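The efficiency penalty of the series path can be illustrated by chaining component efficiencies. The numbers below are generic textbook-style assumptions, not Magna figures, but they show why a direct mechanical path wins at steady highway cruise.

```python
# Illustrative only: component efficiencies here are assumed, not Magna data.
def chain(*effs: float) -> float:
    """Overall efficiency of energy passing through components in series."""
    out = 1.0
    for e in effs:
        out *= e
    return out

# Series (generator) path: engine output -> generator -> power electronics -> motor
series_path = chain(0.93, 0.96, 0.95)
# Parallel path at highway speed: engine output -> gearbox -> wheels
parallel_path = chain(0.97)

print(f"series:   {series_path:.2f}")
print(f"parallel: {parallel_path:.2f}")
```

With these assumed values, roughly 15% of the engine's output is lost in the double conversion, versus about 3% through a gearbox, which is the case for DHD REX's optional parallel mode.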
DHD REX complements Magna’s DHD Duo, a dual e-motor dedicated hybrid already in series production. The single-motor architecture targets OEMs that want range extension capability without the cost and packaging of a two-motor system, and the modular design adapts to both ICE-based platforms being electrified and native BEV architectures adding a range extender.
“DHD REX reflects our commitment to adaptable, customer-focused solutions that support a wide range of performance and market expectations,” said Diba Ilunga, President Magna Powertrain.
Off-the-shelf controllers with safety certifications are giving e-mobility engineers a false sense of security.
An off-the-shelf BMS with a third-party functional safety certification sounds like a solved problem. SIL-rated, ASIL-rated, ready to drop into your e-mobility battery pack. But according to Rich Byczek, Global Chief Engineer for Batteries at Intertek, that certification probably doesn’t cover what you think it covers.
“Certified BMS systems, meaning certified systems that have functional safety certifications from a third party, don’t necessarily address these functions,” Byczek told Charged during a recent webinar (now available to watch on demand). “They just look at the controller as a more generic electrical system.”
The problem: most certifications evaluate the controller hardware against a general integrity standard (IEC 61508, ISO 26262 or ISO 13849). They verify that the electronics are reliable. They don’t verify that the controller monitors individual cell voltages, manages cell-level temperature limits or handles the specific failure modes of lithium-ion chemistry.
Fuses don’t protect at the cell level
The gap is sharpest with passive protection. A pack-level fuse can interrupt a gross overcurrent event, but it’s blind to an individual cell in a series string being driven past its voltage limits. That requires active, per-cell monitoring, and a generic certified controller may not have the inputs and outputs to deliver it.
For e-mobility systems specifically, Byczek stressed that the failure modes and effects analysis (FMEA) must evaluate overvoltage, undervoltage, overcharge, overdischarge, over- and under-temperature, short circuit and excessive current, all at the cell level. “We look at those at the cell level, not only at the macro or battery pack level,” he said.
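The difference between pack-level and cell-level monitoring can be sketched in a few lines. The voltage limits below are typical Li-ion values used for illustration; they are not tied to any particular certified controller.

```python
# Minimal sketch of why cell-level checks matter (hypothetical limits,
# not tied to any specific certified controller or standard).
V_MIN, V_MAX = 2.5, 4.2          # per-cell voltage window (typical Li-ion)

def pack_ok(cells: list[float], n: int) -> bool:
    """Pack-level check: total voltage within n * per-cell limits."""
    total = sum(cells)
    return n * V_MIN <= total <= n * V_MAX

def cells_ok(cells: list[float]) -> bool:
    """Cell-level check: every cell within its own window."""
    return all(V_MIN <= v <= V_MAX for v in cells)

# One cell driven to 4.5 V while another sags: the pack total still looks fine.
cells = [3.9, 4.5, 3.2, 3.9]
print(pack_ok(cells, 4))   # True  -> the fault is invisible at pack level
print(cells_ok(cells))     # False -> per-cell monitoring catches it
```

A controller certified only as a generic electrical system may implement something closer to the first check; the FMEA Byczek describes requires the second.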
This is a different world from portable devices, where legacy standards like IEC 62133 rely on type tests and single-fault evaluations. Those standards were designed for products a user could set down and walk away from.
E-mobility doesn’t work that way. “You’re literally riding on top of that battery, potentially going at a fairly high speed,” said Byczek. “You can’t just get away from it.”
Start with the FMEA, not the certificate
The fix isn’t complicated, but it does require work. Start with an FMEA that covers every safety-critical function your BMS must perform, at the cell level. Then verify that your controller (certified or not) actually has the architecture to deliver each one. A certified controller is a starting point, not a finish line.
The standards themselves can be mixed and matched. SIL, ASIL and Performance Levels don’t map one-to-one, but regulators accept cross-framework approaches as long as your risk assessment demonstrably covers every identified hazard. For a BMS, you’re typically targeting SIL 2, ASIL B or PL c, but the specific level matters less than proving your system can fail safely when a sensor drifts, a resistor opens or a communication link drops.
For teams pivoting from automotive EV programs into adjacent markets like forklifts, floor scrubbers and personal mobility devices, this is the adjustment that matters most. The batteries may be smaller, but the safety obligations are not.
Watch the full webinar: Rich Byczek’s complete presentation on applying functional safety to e-mobility battery systems is available on demand.
ENNOVI has secured a German patent for its adhesive-free lamination technology for battery cell contacting systems (CCS). The laser-based process eliminates the adhesives used in conventional hot and cold lamination, and the company says the technology is already validated—meaning OEMs can adopt it without having to prove out the manufacturing process themselves.
CCS components connect and integrate individual cells within a battery module, typically combining busbars, voltage sense lines and the physical laminate layers that hold them together. Conventional CCS lamination bonds those layers using adhesives in hot or cold press processes. ENNOVI’s laser lamination achieves the same bond without adhesive material. The technology supports cylindrical, prismatic and soft pouch cell architectures. With this patent, ENNOVI now offers three lamination options (hot, cold and adhesive-free) for its CCS designs, giving battery engineers a process choice matched to their cell format.
The patent’s main commercial argument is risk reduction. Developing a new lamination process in-house takes time and carries qualification uncertainty; using a pre-validated, patented technology lets engineering teams skip that work. ENNOVI supports co-development and tailored engineering engagement, which it says allows OEM partners to maintain control over their product roadmaps.
The technology was developed at ENNOVI’s Advanced Solutions Engineering Center in Neckarsulm, which includes prototyping, testing and R&D capabilities. The facility holds ISO 9001:2015 and TISAX certifications—the latter covering automotive supply chain data security requirements.
“Automotive OEMs and battery manufacturers can design in the unique features of adhesive-free lamination, reduce engineering risk by using a technology that is already validated, rather than reinventing it,” said Randy Tan, Product Portfolio Director for Energy Systems at ENNOVI.
AI spending is accelerating at a pace most enterprise budgets simply can’t match. While IT leaders are under pressure to deliver transformative AI capabilities, their capital budgets aren’t growing at the same rate as these AI ambitions. This mismatch is forcing difficult trade-offs: delayed projects, stretching aging infrastructure beyond its intended lifecycle, and diverting funding from other critical initiatives.
But there is another option. Increasingly, IT leaders are turning to technology leasing as a savvy strategy to help expedite AI adoption without sacrificing operational agility or financial liquidity.
AI: Thinking Through the Dollars and Sense
From my vantage point, working closely with IT leaders across industries, I hear the same lament: AI infrastructure is expensive and highly concentrated, particularly GPU-based compute. A single GPU cluster designed to support large-scale AI workloads can cost hundreds of thousands to millions of dollars. For enterprise-wide deployments, total data center investments can easily reach $150 million, and as much as $500 million.
For mid-tier enterprises, the challenges are even greater, as many lack the balance-sheet strength to secure traditional credit for such large capital expenditures. Some resort to private equity or high-interest lenders. But even those who can afford to purchase the infrastructure outright are frustrated by the pace of AI innovation and the risk of the technology quickly becoming outdated or obsolete.
For determined IT leaders, the question is not whether to invest in AI infrastructure, but how to fund it without compromising the broader IT roadmap. This is where the financing strategy becomes just as important as the technology strategy.
IT leasing eases these pressures in several critical ways:
Minimizing upfront costs. Traditional purchasing requires a massive outlay of capital, sometimes forcing companies to scale back or winnow down the scope of projects despite urgent demand. Leasing converts that one-time expense into predictable monthly payments. Instead of committing $50 million upfront, an organization can structure payments over time, freeing capital for additional initiatives and allowing multiple AI projects to move forward simultaneously.
Enhancing flexibility and reducing financial risk. Purchased technology sits on the balance sheet and depreciates over a fixed period. If business needs shift or the organization upgrades early, it can trigger book losses. Leasing, when structured properly, can classify equipment as an operating expense, keeping it off the balance sheet and enabling companies to pivot more easily without the burden of carrying these assets.
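The first point above can be made concrete with a standard level-payment amortization. The rate and term here are illustrative assumptions, not quoted lease terms.

```python
# Sketch: converting an upfront purchase into level monthly lease payments.
# The 7% rate and 36-month term are illustrative assumptions only.
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Level payment on an amortizing lease or loan."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

pmt = monthly_payment(50_000_000, 0.07, 36)
print(f"${pmt:,.0f}/month over 3 years instead of $50M up front")
```

Under these assumptions the $50 million outlay becomes a predictable payment in the low seven figures per month, which is the liquidity-preserving effect the article describes.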
Lease the Entire AI Stack, Not Just the Hardware
IT leaders recognize today’s AI deployments extend far beyond servers. Enterprises are leasing high-performance GPU servers optimized for AI model training and inference, along with high-speed networking equipment, enterprise storage systems, integrated “rack and roll” data center solutions, firewalls, and AI-specific software.
Maintenance contracts, security tools, and embedded applications can all be incorporated into a single lease structure.
This bundling delivers administrative and compliance benefits. Hardware typically carries a residual value, often 10–15% below purchase cost, that is amortized across the lease term. Software licenses and other “soft costs” are included in the payments and expire at term end, eliminating resale complications. Clients are responsible only for the hardware at lease completion, simplifying compliance and ensuring security updates, patches, and licenses remain current throughout the lifecycle.
Combat Obsolescence Before It Becomes a Liability
One of the most common concerns I hear from executives is technology obsolescence. And given the pace of AI, where innovation cycles are measured in months, not years, that concern is justified.
Leasing naturally enforces a rigor and discipline for countering obsolescence. A three- or four-year term creates a defined decision point: extend, buy out or upgrade the technology. This prevents the “set it and forget it” ownership mindset that often leads to aging, unsupported systems and expensive, reactive refresh cycles. In AI environments, delaying upgrades can multiply total costs through inefficiencies and lost competitive advantage.
Leasing is a Budget Multiplier
Looking ahead to 2026 and beyond, IT leaders must think differently about capital allocation. No one can predict what the AI landscape will look like in three years. Owning large volumes of rapidly depreciating infrastructure can limit strategic agility.
Leaders must also factor in the full lifecycle cost of AI infrastructure, which includes equipment refreshes, secure data wiping, asset disposition, and regulatory compliance. These factors carry operational and financial burdens when assets are owned outright.
The most important priority today is building a strategy that enables AI adoption with minimal upfront cost and maximum flexibility. Leasing can act as a budget multiplier. Instead of exhausting capital on one large acquisition, organizations can deploy that same funding across predictable monthly payments, preserving liquidity while expanding total project capacity. In doing so, IT leaders maintain momentum across their complete technology roadmap, ensuring AI transformation doesn’t come at the expense of operational resilience.
# # #
About the Author
Frank Sommers brings 30 years of experience in the IT leasing industry, working closely with global enterprise organizations to help them modernize infrastructure while preserving capital and accelerating technology adoption. Known for consistently exceeding sales targets, Frank has also developed and led numerous successful vendor financing programs in partnership with major resellers, creating flexible acquisition models that support complex IT environments. His deep expertise in IT lifecycle management, financing strategies, and enterprise procurement has made him a trusted advisor across the industry. A former collegiate soccer player at Cal Poly San Luis Obispo, Frank brings the same competitiveness and teamwork to every client relationship.
The labor-intensive job of picking is one of the most critical roles in the warehouse in an era when companies live and die by their ability to get orders out the door quickly and accurately. This is true in B-to-B (business-to-business) environments as well as the increasingly demanding consumer market, where lightning-fast shipping is the norm.
But picking is a tough position to fill these days, according to recent industry studies on the state of warehouse labor. Aside from a general difficulty in finding warehouse help, hiring pick and pack workers was cited as the most difficult recruiting challenge in the industry by business leaders surveyed for the “2025 State of Warehouse Labor Report” from staffing company Instawork.
“Warehouse operators continue to highlight that finding and retaining hourly workers is a top concern,” the authors wrote in the October 2025 report. “Among the most difficult roles to staff are pick/pack workers, forklift operators, and shift leads. Most respondents reported turnover rates less than 10%; however a notable portion of survey respondents cited turnover rates between 10% and 25%, further underscoring the instability many facilities face in maintaining a reliable labor pool.”
More than 20% of the survey’s respondents listed picking as a challenging role to fill, compared to 16% who cited shift leads and 14% who cited forklift operators as hard to find (see Exhibit 1). Amid that pressure, many warehouses are seeking ways to make picking easier and workers more productive—all while maintaining accurate fulfillment metrics and a stable workforce. Here are three steps companies can take to meet those challenges head on.
1. EASE THE PHYSICAL DEMANDS OF THE JOB
David Barker, of supply chain technology provider Honeywell, says picking represents a specific set of challenges that make the job both critical to operations and hard to fill. First, picking is a high-cost area: More than half (55%) of a warehouse’s total operating costs can be attributed to picking, Barker says, citing Georgia Institute of Technology data. This puts a microscope on the picking function, which must also be efficient and precise. At the same time, the physical demands of the job can add up: Pickers often walk miles in a single shift, pushing heavy carts in an environment that can be “hot when it’s hot, and cold when it’s cold,” explains Barker, who is president of Honeywell PSS, the company’s productivity solutions and services business. Repetitive stress from physically reaching, lifting, and twisting to select items can take a toll as well.
Combined, these factors make the picking function ripe for intervention.
“[There is a] spectrum of skills required in the warehouse—and [picking] is a skilled operation,” Barker explains, referring to the precision required of the job and the difficulty of getting replacement staff up to speed in high-turnover situations. “It’s not easy to replace someone. Ramp-up time is required.
“This is an area where there is plenty of opportunity for improvement.”
Barker’s colleague Matt Sterner agrees and points to accelerating fulfillment and delivery demands as an added burden.
“With the continued growth in e-commerce, that continues to drive the need for faster throughput in the warehouse—and picking gets the most attention,” says Sterner, who is global customer marketing leader for transportation, logistics, and warehousing at Honeywell. “You have to get product picked and to the customer as [quickly as possible].”
Warehouse leaders can alleviate some of the physical stress on workers and boost productivity by optimizing facility layout and automating the picking process. A disorganized warehouse can cause excessive travel time, for one thing, so the first step is to analyze your layout to ensure a smooth flow throughout the building.
Ensure a logical flow from receiving to storage, then to picking, packing, and shipping areas to minimize backtracking.
Implement an ABC analysis, in which high-velocity “A” items are stored in the most accessible locations, closest to packing and shipping stations, to cut down on picker travel.
Regularly review and adjust your slotting strategy to adapt to changing demand patterns.
Think vertically. Maximizing vertical space with appropriate racking not only increases storage density but also makes more SKUs [stock-keeping units] accessible within a condensed footprint, further reducing travel.
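The ABC analysis mentioned above can be sketched as a simple Pareto split on cumulative pick velocity. The 80%/95% cutoffs are a common convention, and the SKU data is hypothetical.

```python
# Sketch of an ABC slotting analysis: rank SKUs by pick velocity and split
# on cumulative share. The 80%/95% cutoffs are a common convention.
def abc_classify(velocity: dict[str, int]) -> dict[str, str]:
    ranked = sorted(velocity, key=velocity.get, reverse=True)
    total = sum(velocity.values())
    classes, running = {}, 0
    for sku in ranked:
        running += velocity[sku]
        share = running / total
        classes[sku] = "A" if share <= 0.80 else "B" if share <= 0.95 else "C"
    return classes

# Hypothetical picks per week, per SKU:
picks = {"widget": 500, "gadget": 300, "gizmo": 120, "doodad": 50, "clip": 30}
print(abc_classify(picks))
```

The "A" SKUs that come out of this kind of ranking are the ones to slot closest to packing and shipping; rerunning the analysis periodically implements the review-and-adjust step above.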
Once you have an ideal layout, the next step is to start automating manual processes.
2. TRUST YOUR WORKERS WITH TECHNOLOGY
Technology comes into play in small and large ways to automate the picking function—from relatively easy-to-install voice-based picking solutions to collaborative picking robots and large-scale automated storage and retrieval systems (AS/RS). Barker and Sterner say voice technology is often the best place for companies to begin, noting that most companies see a 30% increase in productivity after implementing voice-directed picking systems—those in which employees receive picking instructions via a headset rather than having to read a printed list or handheld screen.
“People that use voice tend to really enjoy using voice,” Sterner explains. “It’s hands-free, eyes up—so you can focus on what you are doing. It keeps things moving and efficient. That’s always a strong play in the warehouse.”
These days, artificial intelligence (AI) plays a growing role as well. Sterner points out that AI tools can be integrated with warehouse technologies and used for inventory slotting and creating pick paths, based on the tools’ analysis and identification of “hot spots” in the warehouse. Agentic AI tools embedded into voice-directed picking technology can also help by answering pickers’ routine questions, like those involving procedure or protocol.
“[A picker] can ask the agentic AI a question: ‘How do I proceed?’ or ‘Hey, I see a stockout here; what should I do?’” Sterner explains. “[The worker] can ask that question, get an answer, and move on to the next pick [quickly]. It minimizes the disruption of going to ask someone.”
Companies that implement such technologies are likely to find themselves on the winning side of today’s recruiting and retention challenges, based on data from a separate industry survey that was also released late last year. Warehouse robotics company Exotec surveyed 400 U.S. warehouse workers and found that the vast majority embrace the idea of warehouse automation. Almost all of the respondents (98%) reported that automation makes them more productive, for instance, and nearly 70% said that automation-assisted tasks are more enjoyable than traditional, manual tasks. On top of that, nearly 60% reported a decrease in physical strain on their bodies thanks to automation.
That kind of job satisfaction makes workers stick around and helps attract more of them: The survey found that associates who work with automated equipment are more than three times as likely to stay at their job longer rather than leave early (36% versus 11%), for example, and that workers are nearly three times more likely to apply for jobs at warehouses with automation compared to those without (37% versus 13%).
Taking it a step further, Barker and Sterner note that workers can feel undervalued if they’re not being challenged or trusted with technology. Barker cites a Honeywell retail customer that conducted an internal survey of its warehouse workers, some of whom were given company-issued mobile devices as part of their jobs and some of whom were not. The latter group reported feeling less valued than their tech-enabled counterparts—which Barker says surprised both Honeywell and the retailer.
“It’s actually much more powerful than we thought it was. The fact that you award [an expensive] device to someone is very, very meaningful,” Barker says. “That sense of trust makes a difference. It’s a statement that we are investing in you.”
Automation can also lead to higher earnings. The Exotec survey found that nearly half of the workers surveyed (49%) had earned pay increases thanks to warehouse automation and 40% agreed that working with automated equipment increases the likelihood of getting a raise or promotion. Those benefits can help create a more stable workforce.
3. COMMUNICATE A CAREER PATH
Competitive pay rates are an effective recruiting and retention tool, to be sure, but they are not the only tools available—which is good news in an increasingly cost-conscious warehouse environment. The Instawork survey notes that warehouses must carefully balance the pressure to increase worker pay with the financial realities of rising costs for goods, transportation, and facility operations. The best way to do that is by exploring creative and cost-effective strategies to attract and retain talent, according to the survey. Those strategies could include offering flexible schedules, shift bonuses, or long-term career development opportunities.
“Balancing the needs of the workforce with the financial sustainability of the business will be essential for long-term success,” the authors wrote.
Barker and Sterner agree and emphasize the importance of demonstrating a clear growth path—for pickers as well as the broader warehouse workforce.
“Investing in training and development programs is essential,” Sterner says. “If [workers] don’t see a future path in the organization, that makes it difficult to bring them in and keep them.
“Help them grow in the position and show them a future path in the organization. Whenever workers feel supported and feel like there’s opportunity, they tend to stay. In the warehouse, that’s very important. Not everyone wants to come in and stay at the role that [they start with].”
Spain installed 1.14 GW of solar capacity for self-consumption in 2025, lifting cumulative capacity to 9.3 GW, as residential and commercial installations declined while industrial and off-grid segments showed greater resilience, according to data from the Spanish Photovoltaic Union.
Solar self-consumption capacity in Spain reached a cumulative 9.3 GW in 2025, according to data from the Spanish Photovoltaic Union (UNEF).
Spain added 1,139 MW of new self-consumption capacity during the year, representing a 3.7% slowdown compared with 2024. UNEF said the deceleration signals a phase of market stabilization following several years of rapid growth.
The residential segment accounted for 229 MW across 36,330 new installations, a year-on-year decline of 17%. UNEF attributed the contraction to the phase-out of tax incentives linked to energy-efficient home renovations and lower compensation for surplus electricity exported to the grid under deregulated market contracts.
UNEF said falling surplus compensation prices are reducing the attractiveness of oversized systems designed primarily for grid injection. As a result, demand is shifting toward installations optimized for instantaneous self-consumption. The association is calling for revisions to the simplified regulated compensation mechanism to enable broader settlement of surplus energy and improve economic signals for small-scale systems.
The commercial segment installed 176 MW in 2025, down 15% from the previous year. Collective self-consumption remains limited despite its potential to optimize shared generation and demand. Industry representatives said pending regulatory updates are needed to enable aggregated management models, dynamic energy allocation, and an expansion of eligible self-consumption areas.
Industrial self-consumption installations totaled 679 MW, marking a slight increase compared with 2024. UNEF said growth in this segment is being driven by larger medium-voltage systems aimed at reducing electricity costs and partially covering electrified thermal demand. Project viability increasingly depends on tariff structures with a higher variable component and more streamlined permitting for medium-sized installations.
Off-grid installations reached 55 MW in 2025, reflecting growing uptake of hybrid solar-plus-storage systems in rural areas and locations without grid access. Battery integration in grid-connected installations also continued to rise, improving controllability of generation and supporting system flexibility.
UNEF said Spain will need to deploy an average of around 2 GW of self-consumption capacity per year to meet the 19 GW target set out in the country’s National Integrated Energy and Climate Plan. Achieving that level will require regulatory stability, administrative simplification, and more effective integration of distributed energy storage.
54% of respondents cited “energy availability and redundancy” as the single greatest obstacle to successful data center development between now and 2030.
Law firm Foley & Lardner LLP today released its 2026 Data Center Development Report, focusing on growth and challenges in the data center boom that aims to sustain rising AI and LLM usage.
A major focus was on energy, with 54% of respondents citing “energy availability and redundancy” as the single greatest obstacle to successful data center development between now and 2030.
In terms of the right energy mix for data centers, 55% of respondents agreed that the ideal mix to meet the growing power demand of data centers is largely renewables (41%), followed by natural gas (17%), nuclear (16%), and BESS (14%).
Nearly half (48%) of industry participants named advances in energy efficiency (which often includes storage optimization) as the greatest opportunity for development through the end of the decade, and nearly three in four respondents (74%) said advanced energy storage systems like batteries, hybrid solutions, and microgrids are the best way to ensure energy resilience.
Only 14% of developers are actually pursuing modular and small modular nuclear reactors as a viable energy opportunity.
Intriguingly, 63% anticipate a “strategic correction” in the market by 2030, driven by the intense competition for power, with one unnamed banking executive in the report saying, “Once power runs out in 2027 or 2028, that’s where we think deal flow will start to slow down.”
A total of 105 U.S.-based respondents qualified to participate in the survey, all of whom had direct experience in data center development, energy procurement, technology delivery, or operations within the past 24 months.
“There is a Gold Rush mentality right now around securing power. That’s a big part of why people feel there’s a bubble,” said Daniel Farris, partner and co-lead of Foley’s data center and digital infrastructure team. “There’s going to be a period in the next two to three years where power at necessary levels is going to be really hard to come by.”
“Over the next five to 10 years, power providers will need to either grow capacity or increase efficiency to meet the demand fueled by data centers,” added Rachel Conrad, senior counsel and co-lead of Foley’s data center and digital infrastructure team.
French researchers have developed a high-resolution computational framework to model the microclimate effects of large floating solar PV systems, enabling accurate predictions of heat transfer, ambient temperatures, and water evaporation based on panel configuration and wind conditions. The model can inform thermal performance assessments and environmental impact studies, and help optimize designs for utility-scale floating PV as well as ground-mounted and agrivoltaic installations.
French researchers have developed a framework to model microclimate effects of large-sized floating PV systems.
The new model can be used to determine wind-dependent convective heat transfer coefficients (CHTC) and ambient temperatures, and to estimate evaporation patterns in partially covered bodies of water for a variety of tilt angles, module heights, and pitch distances.
“The main novelty of this work lies in the numerical methodology we developed, specifically an upscaling method to quantify panel-atmosphere interactions at the module scale, then model the micrometeorology at the power plant scale with a relatively fine resolution of about 4 meters,” Baptiste Amiot, corresponding author of the research, told pv magazine, adding that the resolution is significantly higher than others in this field.
“Applying this methodology enables us to map the thermal performance across utility-scale installations and to provide insights into local environmental effects, such as evaporative losses,” he said.
The precursor model is geometrically adaptable: it can handle various tilt angles, mounting heights, and inter-row spacings, according to Amiot. “It is particularly well-suited for large-scale installations exposed to sufficiently windy conditions,” he added.
The researchers used a computational fluid dynamics (CFD) precursor model, a microclimate CFD model supporting the PV parameterization, and an experimental survey. A wind-tunnel setup typical of a land-based application was used to confirm accuracy of altitude-based wind profiles.
In addition, the geometrical layout of a commercial floating PV (FPV) installation was used to set the atmospheric boundary layer parameters. Wind direction effects were assessed using the microclimate CFD model, which reproduced the localized conditions of the commercial FPV array.
“The atmospheric component is fundamentally similar to regional climate models (RCMs) but deploying it within a CFD framework offers advantages in terms of surface element parameterization and the spatial discretization we can achieve,” said Amiot.
Findings included temperature gradients ranging between 1.3 C/km and 5.8 C/km, with headwinds and tailwinds relative to the front surface of the PV modules generating the greatest turbulence levels. The team was also able to investigate how turbulent flows influence water-saving gains based on PV coverage of the water surface.
Assessing the results, the researchers noted that the precursor method “readily determines” heat transfer coefficient correlations as a function of wind speed and direction. “This is essential to obtain the thermal U-values that govern panel cooling,” added Amiot.
The model can be extended to model large ground-mounted systems and agrivoltaics, including dynamic configurations where panels adjust orientation throughout the day, according to Amiot. It is suitable for inland and nearshore FPV, but not offshore FPV.
The researchers are currently focused on developing CFD models to predict both the energy output and environmental trade-offs of dual-use photovoltaic systems, and on FPV evaporation research at finer spatial scales coupled with in-situ measurements. The team is also working on an agrivoltaics CFD-plant model to predict crop response below PV canopies.
Renewables and storage could reliably power data centers, but success requires active grids, coordinated planning, and the right mix of technologies. Hitachi Energy CTO, Gerhard Salge, tells pv magazine that holistic approaches ensure technical feasibility, economic viability, and energy system resilience.
As data centers grow in size and complexity, supplying them with cheap and reliable power has never been more pressing. Gerhard Salge, chief technology officer (CTO) at Hitachi Energy, a unit of Japanese conglomerate Hitachi, shed light on the relationship between renewable energy and data center operations, noting that while technically feasible, success requires careful planning, the right infrastructure, and a holistic approach.
“When we look at what's happening in the grids, then renewables are an active element on the power generation side, and the data centers are an active element on the demand side,” Salge told pv magazine. “What you need in addition to that is in the dimensions of flexibility, for which we need storage and a grid that can actively act also here in order to bring all these elements together.”
According to Salge, the key is active grids, not passive systems that simply react to conditions. With more renewables, changing demand patterns, new load centers, and storage options like batteries and existing facilities such as pumped hydro, it is crucial to coordinate these resources actively to maintain supply security, power quality, and cost optimization.
“But when you talk about the impact and the correlation between renewables and data centers, you need always to consider this full scope of the flexibility in a power system of all the elements—demand side, generation side, storage side, and the active grid in between,” he said, noting that weak or congested grids would not serve this purpose.
AI data centers
Salge warned that not all data centers are the same. “There are conventional data centers and AI data centers,” he said. “Conventional data centers are essentially high-load systems with some fluctuations on top. They contain many processors handling requests—from search engines or other applications—so the workload is distributed stochastically across them. This creates a baseline load with random ups and downs, which is the typical load pattern of a conventional data center.”
AI workloads, in contrast, rely heavily on GPUs or AI accelerators, which consume significant power continuously. Unlike conventional data centers, AI data centers often run at sustained high load, sometimes close to maximum capacity for long periods.
Hitachi Energy CTO Gerhard Salge
Image: Hitachi Energy
“AI data centers are specifically good in doing parallel computing,” Salge explained. “So many of them are triggered with the same demand pattern at the same time, which creates these spikes up and down in the demand profile, and they come in parallel all together.”
These fluctuations challenge both the power supply and the voltage and frequency quality of the connected grid. “So, you need to transport active power from an energy storage system or a supercapacitor to the demand of the AI data center. And that then needs to involve really the control of the data center’s active power. What you need is the interaction between the storage unit and then the AI data center to provide active power or to absorb it afterwards when the peak goes down. That can be also done by a supercapacitor.”
Batteries can store much more energy than supercapacitors, but the latter can cycle smaller amounts of energy far more frequently. “However, if you put in a battery that is smaller than the load and you really need to cycle it through its full capacity, the battery will not survive very long with your data center. Because the frequency of these bursts is so high, you are aging the battery very quickly, so supercapacitors can do more cycles,” Salge emphasized.
He also noted that batteries and supercapacitors are both mature technologies, but the optimal setup—whether one, the other, or a combination with traditional capacitors—depends on storage size, number of racks, voltage levels, and overall system design.
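Salge’s point about cycle life can be made concrete with some back-of-the-envelope arithmetic. The burst period and cycle-life ratings below are our own assumed orders of magnitude, not figures from the interview:

```python
# Illustrative arithmetic only: the burst period and cycle-life ratings
# are assumed orders of magnitude, not figures from the interview.
SECONDS_PER_DAY = 24 * 3600

burst_period_s = 10               # assumed: one full cycle every 10 s
cycles_per_day = SECONDS_PER_DAY / burst_period_s

battery_cycle_life = 5_000        # assumed Li-ion full-cycle rating
supercap_cycle_life = 1_000_000   # assumed supercapacitor cycle rating

print(f"Cycles per day: {cycles_per_day:.0f}")                              # 8640
print(f"Battery lasts: {battery_cycle_life / cycles_per_day:.1f} days")     # ~0.6
print(f"Supercapacitor lasts: {supercap_cycle_life / cycles_per_day:.0f} days")  # ~116
```

Even with generous assumptions, a fully cycled battery would be exhausted in well under a day, while a supercapacitor survives for months, which is why a small buffer sized below the load favors supercapacitors.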
Managing AI training bursts
Salge stressed the importance of complying with grid codes across geographies. “You need to become a good citizen to the power system,” he said. “You have to collaborate with local utilities to make sure that you are not infringing the grid codes and you are not disturbing with the data center back into the grid. A good way to do this, when renewables and data centers are co-located, is to manage renewable energy supply already inside the data center territory. Moreover, having a future-fit developed grid is a clear advantage. Because you have much more of these flexibility elements and the active elements to manage storage and renewable integration and to manage the dynamic loads of the data centers.”
If the grid is not future-fit with modern, actively operating equipment, operators will see significantly more stress. “With holistic planning, instead, you can even use some of the data center flexibility as a controllable and demand response kind of feature,” Salge said, adding that data center operators could coordinate AI training bursts to periods when the power system has more available capacity. This makes the data center a predictable, controllable demand, stressing the grid only when it is prepared.
“In conclusion, regarding technical feasibility: yes, it’s possible, but it requires the right configuration,” Salge said.
Economic feasibility
On economics, Salge believes solar and wind remain the cheapest power sources, even when accounting for the grid flexibility needed to integrate them with data centers. Solar is fastest to deploy, wind complements it well, and both can be scaled in parallel.
“Any increase in data center demand requires investment, whether from renewables or conventional power. Economics depend on the market, and market mechanisms, regulations, and technical grid planning are interconnected, influencing energy flow, pricing, and system stability,” he said.
“We recommend developers to work with all stakeholders—utilities, technology providers, and planners—from the start to ensure reliability, affordability, and social acceptance. Holistic planning avoids reactive fixes and leads to better long-term outcomes,” Salge concluded.
Researchers in Iraq have developed biomimetic leaf vein–inspired fins for photovoltaic panels, with reticulate (RET) venation reducing panel temperature by 33.6 C and boosting efficiency by 18% using passive cooling. Their study combines 3D CFD simulations and electrical evaluations to optimize fin geometry, offering a sustainable alternative to conventional cooling methods.
A research group from Iraq’s Al-Furat Al-Awsat Technical University has numerically investigated the thermal and electrical performance of PV panels integrated with leaf vein–inspired fins. They have simulated four types of venation used by plants, namely pinnate venation (PIN), reticulate venation (RET), parallel venation along the vertical axis (PAR-I), and parallel venation along the horizontal axis (PAR-II).
“The key novelty of our research lies in introducing and systematically optimizing biomimetic leaf vein–inspired fin geometries as passive heat sinks for photovoltaic panels,” corresponding author Yasser A. Jebbar told pv magazine. “While conventional cooling approaches rely on simple straight fins, fluids, or active systems, our study is among the first to directly translate natural leaf venation patterns—particularly RET structures—into manufacturable backside fins specifically tailored for PV thermal and electrical performance.”
The team combined detailed 3D computational fluid dynamics (CFD) modeling with electrical efficiency analysis to identify geometries that maximize heat dissipation without additional energy input or water consumption. Next steps include experimental validation of the leaf vein fin designs under real outdoor conditions, particularly in hot climates.
The simulated PV panel consisted of five layers: glass, two ethylene-vinyl acetate (EVA) layers, a solar cell layer, and a Tedlar layer, with a copper heat sink and fins attached. All fin configurations were initially 0.002 m thick, 0.03 m high, and spaced 0.05 m apart. Panels measured 0.5 m × 0.5 m, with a surrounding air velocity of 1.5 m/s and incident irradiance of 1,000 W/m².
RET fins outperformed all other designs, reducing operating temperature by 33.6 C and increasing electrical efficiency from 12.0% to 14.19%, an 18% relative improvement, compared to uncooled panels.
“This temperature reduction rivals, and in some cases exceeds, water-based or hybrid cooling methods, despite relying solely on passive air cooling,” Jebbar noted. The study also highlighted the significant impact of fin height, more than spacing or thickness, on cooling performance.
The team further optimized the RET fins, varying spacing from 0.02–0.07 m, height from 0.02–0.07 m, and thickness from 0.002–0.007 m. The optimal geometry—0.03 m spacing, 0.05 m height, and 0.006 m thickness—achieved the maximum 33.6 C temperature reduction and 18% efficiency gain.
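The reported relative gain follows directly from the two efficiency figures in the study; a one-line check using only numbers from the article:

```python
# Verify the reported relative efficiency improvement of the RET-finned panel.
uncooled_eff = 12.0   # %, uncooled panel (from the article)
cooled_eff = 14.19    # %, RET-finned panel (from the article)

relative_gain = (cooled_eff - uncooled_eff) / uncooled_eff * 100
print(f"Relative improvement: {relative_gain:.1f}%")  # 18.2%, i.e. the ~18% reported
```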
UNSW researchers identified a new damp-heat degradation mechanism in TOPCon modules with laser-fired contacts, driven primarily by rear-side recombination and open-circuit voltage loss rather than series-resistance increase. The study highlights that magnesium in white EVA encapsulants accelerates degradation, guiding improved encapsulant and backsheet selection for more reliable modules in humid environments.
A research team from the University of New South Wales (UNSW) has identified a new damp heat-induced degradation pathway in TOPCon modules fabricated with laser-assisted fired contacts.
“Unlike earlier studies dominated by series-resistance increase, the primary degradation driver here is a reduction in open-circuit voltage, linked to enhanced rear-side recombination,” the research's lead author, Bram Hoex, told pv magazine. “The new degradation mechanism emerged under extended damp-heat (DH) exposure.”
The scientists conducted their analysis on 182 mm × 182 mm TOPCon cells fabricated in 2024 with laser-assisted firing.
The TOPCon solar cells employed a boron-doped p⁺ emitter, along with a front-side passivation stack consisting of unintentionally grown silicon dioxide (SiOₓ), aluminium oxide (Al₂O₃), and hydrogenated silicon nitride (SiNₓ:H), capped with a screen-printed H-pattern silver (Ag) contact grid. On the rear side, the structure comprised a SiO₂/phosphorus-doped n⁺ polycrystalline silicon/SiNₓ:H stack, also contacted by a screen-printed H-pattern Ag grid.
The researchers encapsulated the cells with different bills of materials (BOMs): two types of ethylene vinyl acetate (EVA); two types of polyolefin elastomer (POE); and one type of EVA-POE-EVA (EPE). They also used commercial coated polyethylene terephthalate (PET) composite (CPC) backsheets.
“The mini modules were laminated at 153 C for 8 min under standard industrial lamination conditions,” the academics explained. “All modules underwent DH test at 85 C and 85% relative humidity (RH) in an ASLi climate chamber for up to 2,000 h to study humidity-induced failures.”
Schematic of the TOPCon solar cells and modules
Image: UNSW, Solar Energy Materials and Solar Cells, CC BY 4.0
The tests showed that maximum power losses ranged from 6% to 16%, with the difference among these values depending strongly on the encapsulation BOM.
“The modules with POE on both sides were the most stable at around 8%, while those using white EVA on the rear side, especially in combination with EPE, showed the largest losses at around 16%,” said Hoex. “The primary driver of the degradation was a reduction in open-circuit voltage rather than the increased series resistance after DH testing, which diverges from previous findings that predominantly attributed DH-induced degradation to metallisation corrosion.”
The research team explained that higher levels of degradation were attributable to additives containing magnesium (Mg) in white EVA, which migrate under DH, hydrate, and create an alkaline micro-environment. “This alkaline chemistry corrodes the rear SiNx passivation layer, increases interfacial hydrogen concentration, induces local pinhole-like defects, and raises dark saturation current, ultimately reducing open-circuit voltage,” Hoex emphasized.
The scientists also explained that, although Mg in white EVA encapsulants and its role in acetic acid–induced degradation was previously reported, the effect of MgO on performance degradation in TOPCon modules was not explicitly studied.
“We hope this work helps refine encapsulant and BOM selection strategies for next-generation TOPCon modules, particularly for humid-climate deployment,” Hoex concluded. “It provides clear guidance for controlling Mg content in rear encapsulants and optimising rear-side passivation robustness. The mechanistic insights from this study have already informed upstream design changes, substantially reducing risk in commercial modules.”
Fraunhofer IFAM has developed a dynamic impedance spectroscopy method for real-time battery diagnostics, allowing continuous monitoring during operation. This innovative approach enhances performance, safety, and lifespan by enabling precise, instant detection of internal issues and optimising charging processes. Its applications extend across electric vehicles, renewable energy, and critical power systems.
China’s cumulative power-sector energy storage capacity reached 213.3 GW by the end of 2025, up 54% year on year, according to data from the China Energy Storage Alliance (CNESA). Pumped hydro accounted for 31.3% of the total, while “new-type” energy storage made up 67.9% – around 144.7 GW.
Based on the CNESA DataLink 2025 annual energy storage dataset, presented at a press conference in Beijing on Jan. 22, a total of 66.43 GW/189.48 GWh of new-type energy storage systems was commissioned in 2025.
The added power and energy scales increased 52% and 73% year on year, respectively, which CNESA linked to a continued shift toward longer-duration configurations; it reported the average duration rising to 2.58 hours in 2025, from 2.11 hours in 2021.
CNESA said the leading application scenario has shifted toward standalone energy storage, which accounted for 58%, while user-side storage fell to 8% and thermal-plus-storage frequency regulation to 1.4%; “renewables-paired storage” was described as stable.
Geographically, CNESA reported that the top 10 provinces each exceeded 5 GWh of newly commissioned capacity and together represented about 90% of additions. Inner Mongolia ranked first by both power and energy capacity, and Yunnan entered the top 10 for the first time.
Lithium iron phosphate (LFP) batteries continued to dominate, with CNESA reporting over 98% of new-type installed capacity. CNESA also noted emerging deployments of sodium-ion, vanadium flow, compressed air, gravity storage, and hybrid systems, separately citing a 40 MW/40 MWh grid-forming sodium-ion project in Wenshan, Yunnan as an example.
On procurement, CNESA reported 690 energy storage system tenders (excluding centralized/framework procurement), down 10.4%, while EPC tenders rose to 1,536, up 4.5%. Winning bid volumes (excluding centralized/framework procurement) reached 121.5 GWh for systems and 206.3 GWh for EPC.
CNESA’s tender-price analysis for LFP systems (excluding user-side applications) reported a 2025 winning bid price range of CNY 391.14/kWh ($55/kWh) to CNY 913.00/kWh ($128/kWh). For EPC (excluding user-side), CNESA reported average winning bid prices of CNY 1,043.82/kWh ($146/kWh) for 2-hour projects and CNY 935.40/kWh ($131/kWh) for 4-hour projects.
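The dollar figures imply an exchange rate of roughly 7.15 CNY per USD; that rate is our inference, as the article does not state the rate used. A quick conversion check on the four reported prices:

```python
# Convert the reported CNY/kWh winning bid prices to USD/kWh, assuming an
# exchange rate of about 7.15 CNY per USD (our assumption, not from the article).
CNY_PER_USD = 7.15

prices = [("LFP system, low", 391.14),
          ("LFP system, high", 913.00),
          ("EPC, 2-hour avg", 1043.82),
          ("EPC, 4-hour avg", 935.40)]

for label, cny in prices:
    print(f"{label}: CNY {cny}/kWh = ${cny / CNY_PER_USD:.0f}/kWh")
```

At this assumed rate the rounded results match the article’s dollar figures ($55, $128, $146, and $131 per kWh).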
CNESA also launched a policy “map” for standalone storage market mechanisms covering 21 provinces.
In Short: Indian researchers have developed a self-charging solar energy storage device that integrates energy harvesting and storage into one unit. Designed as a photo-supercapacitor, the system captures sunlight and stores power simultaneously, eliminating the need for separate solar panels and batteries. The technology promises efficient, low-cost solutions for portable and off-grid energy needs.
In Detail: Scientists have developed an innovative sunlight-powered supercapacitor, called a photo-supercapacitor, that can both capture and store solar energy in a single integrated device.
This could be a remarkable step towards clean, self-sustaining energy storage systems, paving the way for efficient, low-cost, and eco-friendly power solutions for portable, wearable, and off-grid technologies.
Traditionally, solar energy systems rely on two separate units: solar panels for energy capture and batteries or supercapacitors for energy storage. While such hybrid systems are widely implemented from large-scale solar farms to portable electronics, they rely on additional power management electronics to regulate voltage and current mismatches between the energy harvester and the storage unit. This requirement increases system complexity, cost, energy losses, and device footprint, which becomes particularly detrimental for miniaturised and autonomous devices.
This new photo-rechargeable supercapacitor, developed by the Centre for Nano and Soft Matter Sciences (CeNS), Bengaluru, an autonomous institute under the Department of Science and Technology (DST), Government of India, seamlessly combines both processes, converting sunlight into electrical energy and storing that energy for later use, thus simplifying design and minimising energy loss during conversion and storage.
Under the guidance of Dr. Kavita Pandey, the team built the device around binder-free nickel-cobalt oxide (NiCo2O4) nanowires, uniformly grown on nickel foam using a simple in situ hydrothermal process.
These nanowires, only a few nanometres in diameter and several micrometres long, form a highly porous and conductive 3D network that efficiently absorbs sunlight and stores electrical charge. This unique architecture allowed the material to act simultaneously as a solar energy harvester and a supercapacitor electrode.
When tested, the NiCo2O4 electrode exhibited a remarkable 54% increase in capacitance under illumination, rising from 570 to 880 mF/cm² at a current density of 15 mA/cm². This exceptional performance stems from the efficient generation and transfer of light-induced charge carriers within the nanowire network. Even after 10,000 charge-discharge cycles, the electrode retained 85% of its original capacity, demonstrating its long-term stability, an essential feature for practical applications.
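The 54% figure is consistent with the two capacitance values given, as a quick check shows:

```python
# Check the reported photo-induced capacitance increase (values from the article).
dark_cap = 570   # mF/cm², without illumination
lit_cap = 880    # mF/cm², under illumination

increase_pct = (lit_cap - dark_cap) / dark_cap * 100
print(f"Capacitance increase under light: {increase_pct:.0f}%")  # 54%
```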
To evaluate its real-world applicability, the researchers prepared an asymmetric photo-supercapacitor using activated carbon as the negative electrode and NiCo2O4 nanowires as the positive electrode. The device delivered a stable output voltage of 1.2 volts, retained 88% of its capacitance even after 1,000 photo-charging cycles, and operated efficiently under varying sunlight conditions, from low indoor illumination to intense 2-sun irradiance. This stability indicates that the nanowire structure can endure both mechanical and electrochemical stress over extended periods of use.
By integrating sunlight harvesting and energy storage in a single device, the team developed self-charging power systems that can function anywhere even in remote regions without access to an electrical grid.
Such technology can substantially reduce dependence on fossil fuels and conventional batteries, paving the way for a sustainable and green energy future. In addition to the experimental work, a theoretical study was carried out to understand why the NiCo2O4 nanowire system performs so efficiently.
This study revealed that nickel substitution in the cobalt oxide framework narrows the band gap to approximately 1.67 eV and induces half-metallic behavior. This means the material behaves as a semiconductor for one type of electron spin while remaining metallic for the other: a rare dual property that enables faster charge transport and higher electrical conductivity. Such spin-dependent conductivity is particularly valuable for photo-assisted charge storage applications.
Integrating sunlight capture and charge storage in a single architecture has been a long-standing goal in sustainable energy research.
This study also demonstrates the synergy between experimental and theoretical insights in materials research. While experiments confirmed enhanced capacitance and durability, theoretical simulations revealed the atomic-level mechanisms driving these improvements. Together, they provide a comprehensive understanding of how nanostructured materials can be optimized for light-responsive energy storage.
This work, published in Sustainable Energy & Fuels (Royal Society of Chemistry Journal), introduces a new class of smart, photo-rechargeable energy storage devices. Overall, this research represents a paradigm shift in renewable energy storage. With further development, such systems could play a pivotal role in achieving India’s clean energy ambitions and inspiring similar innovations worldwide.
Manufacturing and battery technology advisory firm XC Technology has signed a strategic collaboration with Photon Automation to support the latter’s new subsidiary, Photon Energy, focusing on offering turn-key energy storage system (ESS) contract manufacturing services.
Photon Energy will leverage the collaboration to provide a complete suite of services, from design support and prototyping to full-scale production and quality assurance for various energy storage applications. That includes providing manufacturing solutions for a range of portable, grid and industrial ESS products.
Precision laser welding applications will use Photon Automation’s specialized capabilities for critical welding processes in ESS components. Meanwhile, battery production and optimization will leverage XC Technology’s battery process experience for performance and safety optimization for next-generation energy systems.
“XC Technology’s experience in optimizing production for complex battery technologies and turnkey assemblies, combined with Photon Automation’s turnkey systems build and integration, creates a powerful offering for the market,” said Ben Wrightsman, founder of XC Technology.
The Finnish start-up says its sand battery technology is scalable from 20 to 500 MWh with charging power from 1 to 20 MW, depending on industrial needs.
Finnish cleantech startup TheStorage says that its thermal storage technology could reduce industrial energy costs by up to 70% and cut carbon emissions by as much as 90%. The system converts renewable electricity into heat, stores it in sand, and delivers it on-demand for industrial heating.
The concept emerged in Finland in 2023, with engineering work beginning in 2024. In January 2026, TheStorage installed its first industrial-scale pilot at a brewery, putting the technology to the test in a real-world setting. There, it produces fossil-free steam for the brewery’s production lines.
“Producing steam without fossil fuels is a major step toward carbon-neutral production,” says Vesa Peltola, Production Director of the brewery.
TheStorage’s technology captures electricity when it is abundant and inexpensive, converts it into high-temperature heat, and stores it in sand. This stored heat can later be used in industrial processes independently of real-time electricity availability.
Scientists have grown organic romaine lettuce under 13 different types of PV modules in an unusually hot Canadian summer. Their analysis showed lettuce yields increased by over 400% compared to unshaded control plants.
A research group from Canada’s Western University has investigated the performance of organic romaine lettuce, a heat-sensitive crop, under a broad range of agrivoltaic conditions. The test was conducted in London, Ontario, in the summer of 2025, during which 18 days had temperatures over 30 C.
“Our study explores how agrivoltaic systems can be tailored to optimize crop growth, especially under extreme heat conditions, while contributing to sustainable energy generation,” corresponding researcher Uzair Jamil told pv magazine.
“This becomes especially relevant in the context of climate change, where we are experiencing temperature extremes across the world,” Jamil added. “We examined the performance of organic romaine lettuce under thirteen different agrivoltaic configurations – ranging from crystalline silicon PV to thin-film-colored modules (red, blue, green) – in outdoor, high-temperature stress conditions.”
More specifically, the experiment included c-Si modules with transparency rates of 8%, 44%, and 69%; blue thin-film modules with transparency of 60%, 70%, and 80%; green thin-film modules with transparency of 60%, 70%, and 80%; and red thin-film modules with transparency of 40%, 50%, 70%, and 80%.
All agrivoltaic installations had a leading-edge height of 2.0 m and a trailing-edge height of 2.8 m, and the modules were oriented southwards at 34°. Pots with organic romaine lettuce were placed under all configurations, along with three pots fully exposed to ambient sunlight without shading, used as controls.
In addition to measurements against the control, the scientific group compared the results to the national average per-pot yield for 2022, which included fewer high-temperature days and was therefore considered typical. Those data points were taken from agricultural census data, which later enabled the researchers to create nationwide projections of their results.
“Lettuce yields increased by over 400% compared to unshaded control plants, and 200% relative to national average yields,” Jamil said about the results. “60% transparent blue Cd-Te and 44% transparent crystalline silicon PV modules delivered the highest productivity gains, demonstrating the importance of both shading intensity and spectral quality in boosting plant growth.”
Jamil further noted that if agrivoltaics were scaled up to protect Canada’s entire lettuce crop, they could add 392,000 tonnes of lettuce.
“That translates into CAD $62.9 billion (USD $46.6 billion) in revenue over 25 years,” he said. “If scaled across Canada, agrivoltaics could also reduce 6.4 million tonnes of CO2 emissions over 25 years, making it a key player in reducing the agricultural sector’s environmental footprint.”