
Received today — 6 April 2026

All emerging cyber threats targeting power infrastructure at a glance

Researchers in Morocco analyzed cybersecurity challenges in smart grids, highlighting AI-driven detection and defense strategies against threats such as distributed denial-of-service, false data injection, replay, and IoT-based attacks. They recommend multi-layered protections, real-time anomaly detection, secure IoT devices, and staff training to enhance resilience and safeguard power system operations.

Researchers at Morocco's Higher School of Technology, Moulay Ismail University, have conducted a comprehensive analysis of emerging cybersecurity challenges in power systems and detailed recent advances in detection and defense strategies.

Their work emphasizes the growing role of AI in enhancing control, protection, and resilience in modern smart grids. It also classifies cyber threats by origin, impact, and affected system layers to provide a structured understanding and reviews machine learning and optimization-based intrusion detection systems (IDSs) for power systems.

The researchers highlighted that renewable smart grids face diverse cyber threats that can disrupt operations and compromise data. Distributed denial-of-service (DDoS) attacks, for example, flood networks with traffic, blocking legitimate access and delaying control actions, while data integrity attacks manipulate sensor or control data, causing incorrect decisions or blackouts.

Additionally, replay attacks retransmit intercepted data to confuse the system, and false data injection attacks subtly alter real-time data to mimic normal operations while disrupting the grid. Covert attacks inject hidden signals that manipulate system behavior without detection, whereas IoT device-based attacks exploit vulnerabilities in meters or sensors to spread malware, steal data, or launch DoS attacks.

Finally, zero dynamics attacks leverage system models to generate hidden signals that leave output measurements unchanged but affect operations, posing sophisticated stealth threats to smart grid security.

 Do you want to strengthen and enhance the cyber security of your solar energy assets to safeguard them against emerging threats?

Join us on Apr. 29 for pv magazine Webinar+ | Decoding the first massive cyberattack on Europe’s solar energy infrastructure – The Poland case and lessons learned

The researchers warned that while smart grids have improved energy efficiency and flexibility through advanced communication tools and distributed energy sources, they have also introduced new cyber vulnerabilities. Threats such as phishing, malware, denial-of-service (DoS) attacks, and false data injection (FDI) can disrupt operations, compromise data, and damage infrastructure.

They recommend implementing defense strategies that maintain confidentiality, integrity, and availability, while also incorporating authentication, authorization, privacy, and reliability. Machine learning and data-driven intrusion detection systems can help identify anomalies and detect FDI attacks in real time, particularly in smart grids and industrial control systems such as SCADA, which rely on accurate sensor measurements for state estimation.
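
The state-estimation link the researchers mention can be made concrete with a toy residual test. The sketch below is illustrative only (hypothetical meter values, an assumed 3-sigma noise threshold, and a deliberately simplified one-state model where redundant meters all measure the same quantity, so the least-squares estimate is just the mean): it flags any measurement whose residual against the estimate is anomalously large.

```python
import statistics

NOISE_SIGMA = 0.01  # assumed meter noise standard deviation (per-unit)

def fdi_suspect(measurements, k=3.0):
    """Flag measurements whose residual exceeds k sigma (toy bad-data test)."""
    x_hat = statistics.fmean(measurements)         # least-squares state estimate
    residuals = [z - x_hat for z in measurements]  # measurement residuals
    return [abs(r) > k * NOISE_SIGMA for r in residuals]

clean = [1.00, 1.01, 0.99, 1.00, 1.01]
attacked = [1.00, 1.01, 1.00, 1.10, 1.01]  # one meter's value injected
print(fdi_suspect(clean))     # no flags
print(fdi_suspect(attacked))  # the injected meter is flagged
```

A caveat worth noting: a coordinated FDI attack can be crafted to keep residuals small, which is precisely why the review points to machine learning detectors as a complement to classical residual tests.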

The research team also encouraged energy asset owners and grid operators to adopt substation security measures and protocol vulnerability analyses to detect risks at the hardware and network levels. Blockchain, distributed ledgers, and Hilbert-Huang transform methods are highlighted as tools to further strengthen cybersecurity.

IoT devices, including sensors and smart meters, should be secured with strong authentication, safe boot procedures, frequent firmware updates, and standardized security across manufacturers. Sensitive grid data should be protected using techniques such as homomorphic encryption to maintain confidentiality during storage and transmission.
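
Homomorphic encryption, which the researchers recommend for protecting grid data, lets an aggregator compute on ciphertexts without seeing individual readings. Below is a toy additively homomorphic (Paillier-style) sketch, with tiny primes chosen purely for readability; any real deployment would use 2048-bit-plus moduli and an audited cryptography library.

```python
import math
import random

# Toy Paillier keypair (tiny primes for illustration only)
p, q = 293, 433
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)  # valid because L(g^lam mod n^2) = lam mod n

def enc(m):
    """Encrypt plaintext m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    """Decrypt ciphertext c back to the plaintext."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Homomorphic addition: multiplying ciphertexts decrypts to the sum
a, b = 120, 345
assert dec((enc(a) * enc(b)) % n2) == (a + b) % n
```

The property shown at the end is what makes the scheme useful here: meter readings can be summed for billing or state estimation while each individual reading stays encrypted in storage and in transit.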

“A multi-tiered security approach that includes firewalls, intrusion detection systems, and network segmentation can enhance grid resilience. Extracting critical elements from vulnerable IoT devices and leveraging redundant control channels ensures operational continuity during attacks,” the researchers stated.

Machine learning and anomaly detection systems should be deployed to enable real-time identification of irregular activities, including FDI and malware propagation. Standardized protocols and rapid incident response measures should also support collaboration among grid operators, IoT manufacturers, and regulators, facilitated by information-sharing platforms.

The researchers emphasize that human-centered attacks, including phishing and social engineering, remain significant threats, but these can be mitigated through regular staff and user training.

The review was presented in “Cybersecurity challenges and defense strategies for next-generation power systems,” published in Cyber-Physical Energy Systems.

New intrusion detection systems boost protection of SCADA systems against cyber threats

An international research team developed two deep learning-based IDS models to enhance cybersecurity in SCADA systems. The hybrid approach reportedly improves detection of complex and novel cyber threats with high accuracy, adaptability, and efficiency, outperforming traditional methods across multiple datasets.

A Saudi-British research team has developed two new deep learning-based intrusion detection systems (IDSs) that can reportedly improve the cybersecurity of SCADA networks.

In large-scale solar power plants, SCADA systems play a vital role by overseeing energy generation, monitoring the performance of solar panels, optimizing output, identifying potential faults, and maintaining smooth overall operations. In essence, they act as the central system that converts raw solar data into practical control decisions, ensuring the plant operates safely, efficiently, and profitably.

The scientists explained that current cybersecurity frameworks are often inadequate for SCADA systems because they cannot fully cope with the complexity and constantly evolving nature of modern cyber threats. Most existing approaches rely on signature-based detection, which depends on prior knowledge of attack patterns and therefore fails to detect zero-day exploits or novel intrusion techniques.

To address this limitation, the researchers turned to deep learning methods, as these techniques can process large volumes of data, identify complex patterns, and enable more proactive threat detection.

“Such capability of handling and analyzing big data is particularly useful during scenarios when SCADA systems are generating huge streams of real-time data, including sensor readings, control commands, and other system logs,” they explained. “Furthermore, deep learning methods, especially convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have shown outstanding performances in the detection of complex attack scenarios with sequential or spatial patterns in data.”


The proposed approach integrates two new IDSs, named the Spike Encoding Adaptive Regulation Kernel (SPARK) and the Scented Alpine Descent (SAD) algorithm. By leveraging their complementary strengths, the method reportedly improves spike-threshold accuracy while enhancing adaptability and robustness under dynamic conditions.

The SPARK model introduces adaptive spike encoding by dynamically adjusting thresholds based on input signal characteristics. It uses statistical methods to respond to variations in neural input, improving sensitivity to changes in intensity and frequency. By integrating both temporal and spatial features, SPARK enhances information encoding, especially for complex datasets. Unlike traditional fixed-threshold methods, it provides context-aware thresholding, improving accuracy and reliability.
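
Context-aware thresholding of this kind can be illustrated with a rolling-statistics spike encoder. This is a minimal sketch of the general idea, not the paper's SPARK model; the window size and the k = 2 multiplier are assumed values.

```python
from collections import deque
import statistics

def adaptive_spike_encode(signal, window=8, k=2.0):
    """Emit a spike (1) when a sample exceeds mean + k*std of a rolling window,
    so the threshold tracks local signal statistics instead of being fixed."""
    buf = deque(maxlen=window)
    spikes = []
    for x in signal:
        if len(buf) >= 2:
            thr = statistics.fmean(buf) + k * statistics.pstdev(buf)
            spikes.append(1 if x > thr else 0)
        else:
            spikes.append(0)  # too few samples to estimate a threshold yet
        buf.append(x)
    return spikes

steady = [10.0, 10.1, 9.9, 10.0, 10.1, 9.9, 10.0]
burst = steady + [13.0]  # sudden jump relative to local statistics
print(adaptive_spike_encode(burst))  # spike only on the final jump
```

The same jump would go unnoticed by a fixed threshold set for a noisier channel; here the threshold adapts to however quiet or noisy the recent signal has been.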

The SAD algorithm complements SPARK with an optimization strategy inspired by olfactory navigation (the process by which animals use odor cues to locate food, mates, or home) and by Lévy flight behavior (a random search pattern observed in many animal species hunting for targets in unknown environments). This purportedly enables efficient exploration of solution spaces and avoids local minima, ensuring optimal threshold selection.
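
Lévy flight steps are straightforward to sample; Mantegna's algorithm is a standard way to generate the heavy-tailed step lengths such optimizers rely on. This is a generic sketch of the behavior described, not the SAD algorithm itself; beta = 1.5 is a common choice in metaheuristics.

```python
import math
import random
import statistics

def levy_step(beta=1.5):
    """One Lévy-distributed step via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

random.seed(0)
steps = [levy_step() for _ in range(1000)]
# Heavy tail: mostly small steps, with rare very large jumps
print(max(abs(s) for s in steps) / statistics.median(abs(s) for s in steps))
```

Most steps stay small (local refinement) while the rare very large jumps relocate the search entirely, which is what lets a Lévy-flight optimizer escape local minima.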

The hybrid approach can dynamically adjust and optimize spike thresholds simultaneously, surpassing conventional static or isolated approaches, according to the scientists, who noted that the SPARK model is well-suited for SCADA and IoT systems due to its scalability, real-time adaptability, and efficient data handling. Additionally, its lightweight design reduces computational overhead and false positives, making it effective for resource-constrained environments.

“SAD is complementary to SPARK in the sense that it focuses on improving the detection accuracy while maintaining computational efficiency,” the researchers emphasized. “SAD's anomaly scoring mechanism can be integrated into this framework to add another layer of detection, which can run parallel with SPARK. In effect, integrating the deep learning models into the scoring mechanism means that SAD would enable a much more fine-grained analysis of attack patterns with little noticeable impact on performance for the SCADA system in question.”

The researchers used multiple benchmark datasets to evaluate SCADA intrusion detection performance, including the Secure Water Treatment (SWaT) testbed, Gas Pipeline, WUSTL-IIoT, and Electra. These datasets capture diverse industrial environments, attack types, and operational conditions, enabling comprehensive testing. They also include time-series sensor data, actuator commands, and labeled attack scenarios such as denial-of-service (DoS), distributed denial-of-service (DDoS), malware, and injection attacks.

The diversity of datasets ensured accurate modeling of both normal behavior and complex anomalies in SCADA and IIoT systems, according to the research team. Standardized preprocessing, training, and evaluation procedures also enabled comparison across all tested models. Cross-validation and controlled training conditions, meanwhile, reportedly prevented bias and ensured reliable generalization results. Visualization tools such as histograms, loss curves, and confusion matrices provided insights into model behavior and anomaly detection.

The SPARK model was found to consistently demonstrate “superior” performance, achieving high accuracy, precision, and recall across datasets. It outperformed traditional machine learning and deep learning approaches in detecting diverse intrusion types.

“The findings underline, in summary, that the SPARK and SAD models are basically the final frontier in modern intrusion detection,” the scientists said. “Distinctly designed to provide improved detection capabilities and operational efficiency, the two designs also chart a way into more resilient and intelligent security solutions for modern industrial controlled systems (ICSs) and Internet-of-Things (IoT) networks.”

The novel IDSs have been presented in “SPARK and SAD: Leading-edge deep learning frameworks for robust and effective intrusion detection in SCADA systems,” published in the International Journal of Critical Infrastructure Protection. The research team comprised academics from Leeds Beckett University in the United Kingdom and King Abdulaziz University in Saudi Arabia.

Spin-flip emitters could control energy pathways in singlet fission solar cells

Japanese researchers developed a molybdenum-based spin-flip emitter that efficiently harvests triplet excitons from singlet-fission tetracene dimers, producing strong near-infrared emission. This approach could boost solar cell efficiency and enable new quantum technologies by converting otherwise “dark” excitons into usable light.

A research team at Kyushu University in Japan has reported a breakthrough that could steer photovoltaic technology past long‑standing efficiency barriers by harnessing a quantum process known as singlet fission (SF).

Singlet exciton fission is an effect seen in certain materials whereby a single photon can generate two electron-hole pairs as it is absorbed into a solar cell, rather than the usual one. The effect has been observed by scientists as far back as the 1970s, and though it has become an important area of research for some of the world’s leading institutes over the past decade, translating the effect into a viable solar cell has proved complex.

Singlet fission solar cells can produce two electrons from one photon, making the cell more efficient. This happens through a quantum mechanical process where one singlet exciton (an electron-hole pair) is split into two triplet excitons. By pairing SF with a specially designed spin‑flip molybdenum‑based complex, the scientists demonstrated energy conversion and harvesting in solution with an effective quantum yield of around 130%.

“The applications of this work in solar cells will require integrating singlet-fission (SF) materials with spin-flip emitters in solid-state systems,” Nobuo Kimizuka, lead author of the study, told pv magazine. “As fundamental research, our first step is to develop high-efficiency SF and spin-flip emitters with well-controlled energy levels and luminescence quantum yields in solid-state environments, and then evaluate the performance of these integrated systems.”

“We are actively working on building a higher-performance solid-state system,” he added. “Achieving robust performance in solid-state solar cells remains a challenge, but we expect the efficiency to surpass that of conventional SF technology alone. This approach, which multiplies photons and converts otherwise ‘dark’ triplet excitons into light, could open the door to new quantum technologies such as quantum sensors and exciton circuits, while also contributing to the design of next-generation quantum materials.”

The team developed a molybdenum-based spin-flip emitter that selectively captures the energy of triplet excitons before they dissipate. Its molecular design allows electron spin to flip during near-infrared (NIR) light absorption or emission, enabling more efficient harvesting of the multiple excitons generated by singlet fission.

Further analysis showed that sensitization efficiency depends heavily on the structure of the linker connecting tetracene units. The linker dictates not only the spatial arrangement and electronic coupling of the chromophores but also the exchange interaction within the correlated triplet pair. Variations in linker length, rigidity, and conjugation can significantly affect the rate and yield of triplet energy transfer to the spin-flip emitter, influencing both efficiency and the dynamics of the singlet fission process.

“The methodology we developed for assessing doublet yields provides a practical way to estimate triplet yields of SF dimers, even in systems with complex energy-transfer pathways involving both correlated and free triplets,” Kimizuka explained. “Reducing losses from correlated triplet-pair recombination requires either rapid separation into long-lived multiexcitons or faster triplet transfer to an acceptor molecule, achievable through careful energy-level design in oligomers or solid-state structures.”

“With a versatile selection of central metals, including chromium, molybdenum, and vanadium, and tunable ligands informed by Tanabe–Sugano diagrams and ligand-field theory, spin-flip emitters show strong potential as NIR-emitting materials for efficient triplet extraction, especially with recent advances in air-stable designs,” he added.

The interface design will be critical for converting triplet excitons generated by tetracene singlet fission into charge carriers on the silicon solar cell surface. “In SF-sensitized silicon cells, one major source of energy loss is transfer from the SF molecule to silicon via its excited singlet state,” Kimizuka noted. “Our proof-of-concept method blocks these loss pathways, enabling selective extraction of the excited triplet states originating from singlet fission.”

The research findings are available in the study “Exploring Spin-State Selective Harvesting Pathways from Singlet Fission Dimers to a Near-Infrared-Emissive Spin-Flip Emitter,” published in the Journal of the American Chemical Society.

TNO unveils 12.4%-efficient perovskite solar tile

The Dutch research institute has presented what it describes as the world’s first perovskite-based roof tile, achieving up to 13.8% efficiency on standalone modules and 12.4% when installed on a curved surface. The flexible modules were produced using TNO’s experimental roll-to-roll platform.

The Netherlands Organization for Applied Scientific Research (TNO) has today unveiled a building-integrated photovoltaic (BIPV) tile based on perovskite solar cell technology.

The new product is billed as the world's first perovskite solar tile.

“This demonstrator is supported by the Province of North Brabant through the project ‘Solar manufacturing industry to Brabant, Solliance 2.0’. Additional funding was received from the European Union’s Horizon Europe programme for the Luminosity project,” TNO said in a statement. “The work was also partly funded by the National Growth Fund programme SolarNL.”

The Dutch research institute partnered with Netherlands-based BIPV specialist Asat BV in deploying 10 cm x 10 cm perovskite solar modules built on flexible foil onto a curved composite roof tile. Testing indicates that bending the modules to fit the curved surface has minimal impact on their performance.

Standalone modules reached energy conversion efficiencies of up to 13.8%, while the modules retained an efficiency of 12.4% after installation on the curved roof tile.

The experimental production line used to encapsulate the solar tiles

Image: TNO

The perovskite modules were encapsulated with an experimental roll-to-roll manufacturing platform developed by TNO itself. Roll-to-roll manufacturing – similar to the process used in newspaper printing – enables continuous production of solar cells on long rolls of flexible material. The technique is widely seen as a potential pathway to lower production costs and high-volume manufacturing for emerging thin-film technologies such as perovskites.

More technical details about the solar tile were not disclosed. TNO said it will be commercialized by its spinoff Perovion Technologies, which was launched last month. 

TNO's recent research on perovskite solar cells includes developing roll-to-roll and spatial atomic layer deposition (SALD) processes for the deposition of functional materials, solar cell layers, and flexible foils.

In July, Solarge, a manufacturer of lightweight silicon PV modules based in the Netherlands, and TNO unveiled a 32 cm x 34 cm lightweight prototype perovskite solar panel.

A month earlier, Japan’s Sekisui Solar Film, part of Sekisui Chemical, the Brabant Development Agency (BOM), which serves the Dutch province of Noord-Brabant, and TNO signed a letter of intent in Osaka, Japan to explore collaboration related to flexible perovskite solar PV module technologies.

As pv magazine has reported, Sekisui Solar Film is developing technology for lightweight, flexible perovskite solar module manufacturing using an advanced roll-to-roll process. It is working on a 100 MW plant in Japan for large-scale production, is undertaking field demonstrations, and signed a perovskite solar-related memorandum of understanding with Slovakia.

Received before yesterday

Silver price drops sharply, falls back below $80 an ounce

2 February 2026 at 12:23

After hitting an all-time high of $121.65/oz on Jan. 29, silver prices have tumbled to $79.44/oz, with analysts warning of a potential drop toward $50/oz.

After reaching an all-time high of $121.65 per ounce (oz) on Jan. 29, silver prices have fallen sharply in recent days, dropping to $79.44/oz this morning.

The downturn had been anticipated by two analysts interviewed by pv magazine on Jan. 27, who warned that the steep rally seen in previous weeks could reverse abruptly in the days ahead.

One of the two analysts, Mike McGlone, senior commodity strategist at Bloomberg Intelligence, said the price could stabilize around $50/oz, although he did not provide a timeframe for when this new trend might materialize.

“Reversion toward $50 appears as a normal path for the commodity known as the ‘devil's metal’ due to its volatility,” he told pv magazine.

Rhona O’Connell, head of market analysis for EMEA and Asia at StoneX, said on Jan. 27 that investors might soon rethink their rush into silver. She explained that speculative buying had pushed the metal into risky territory, making prices vulnerable to a sharp correction. O’Connell also said fears of potential U.S. tariffs fueled the recent rally, swelling COMEX inventories as metal flowed into the U.S. Further gains are unlikely, she added, dismissing even $100/oz as unsustainable and warning of a potentially severe price reversal.

Silver prices surged by approximately 130% in the past six months and around 243% over the past year. The average silver price was $28.27/oz in 2024, $23.38/oz in 2023, and $21.80/oz in 2022.

Solar-plus-storage for data centers: not a simple switch

2 February 2026 at 11:18

Renewables and storage could reliably power data centers, but success requires active grids, coordinated planning, and the right mix of technologies. Hitachi Energy CTO, Gerhard Salge, tells pv magazine that holistic approaches ensure technical feasibility, economic viability, and energy system resilience.

As data centers grow in size and complexity, supplying them with cheap and reliable power has never been more pressing. Gerhard Salge, chief technology officer (CTO) at Hitachi Energy, a unit of Japanese conglomerate Hitachi, shed light on the relationship between renewable energy and data center operations, noting that while technically feasible, success requires careful planning, the right infrastructure, and a holistic approach.

“When we look at what's happening in the grids, then renewables are an active element on the power generation side, and the data centers are an active element on the demand side,” Salge told pv magazine. “What you need in addition to that is in the dimensions of flexibility, for which we need storage and a grid that can actively act also here in order to bring all these elements together.”

Want to learn more about matching renewables with data center demand?

Join us on April 22 for the 3rd SunRise Arabia Clean Energy Conference in Riyadh.

The event will spotlight how solar and energy storage solutions are driving sustainable and reliable infrastructure, with a particular focus on powering the country’s rapidly growing data center sector.

According to Salge, the key is active grids, not passive systems that simply react to conditions. With more renewables, changing demand patterns, new load centers, and storage options like batteries and existing facilities such as pumped hydro, it is crucial to coordinate these resources actively to maintain supply security, power quality, and cost optimization.

“But when you talk about the impact and the correlation between renewables and data centers, you need always to consider this full scope of the flexibility in a power system of all the elements—demand side, generation side, storage side, and the active grid in between,” he said, noting that weak or congested grids would not serve this purpose.

AI data centers

Salge warned that not all data centers are the same. “There are conventional data centers and AI data centers,” he said. “Conventional data centers are essentially high-load systems with some fluctuations on top. They contain many processors handling requests—from search engines or other applications—so the workload is distributed stochastically across them. This creates a baseline load with random ups and downs, which is the typical load pattern of a conventional data center.”

AI workloads, in contrast, rely heavily on GPUs or AI accelerators, which consume significant power continuously. Unlike conventional data centers, AI data centers often run at sustained high load, sometimes close to maximum capacity for long periods.

Hitachi Energy CTO Gerhard Salge

Image: Hitachi Energy

“AI data centers are specifically good in doing parallel computing,” Salge explained. “So many of them are triggered with the same demand pattern at the same time, which creates these spikes up and down in the demand profile, and they come in parallel all together.”

These fluctuations challenge both the power supply and the voltage and frequency quality of the connected grid. “So, you need to transport active power from an energy storage system or a supercapacitor to the demand of the AI data center. And that then needs to involve really the control of the data center’s active power. What you need is the interaction between the storage unit and then the AI data center to provide active power or to absorb it afterwards when the peak goes down. That can be also done by a supercapacitor.”

Batteries can store much more energy than supercapacitors, but the latter can deliver smaller amounts of energy far more frequently. “However, if you put a battery that is smaller than the load, and you really need to cycle the battery through its full capacity, the battery will not survive very long with your data center, because the frequency of these bursts is so high, then you are aging the battery very, very quickly, so supercapacitors can do more cycles,” Salge emphasized.
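
Salge's point about burst-driven battery aging is easy to quantify with back-of-envelope numbers. All figures below are illustrative assumptions, not values from the interview.

```python
# Hypothetical cycle-life figures and burst cadence (assumptions for illustration)
BATTERY_CYCLE_LIFE = 5_000        # typical Li-ion full-depth cycles
SUPERCAP_CYCLE_LIFE = 1_000_000   # typical supercapacitor cycles
bursts_per_day = 24 * 60 * 6      # one AI training burst every 10 seconds

# If every burst fully cycles the storage device, its rated life is exhausted in:
battery_days = BATTERY_CYCLE_LIFE / bursts_per_day
supercap_days = SUPERCAP_CYCLE_LIFE / bursts_per_day
print(f"battery: {battery_days:.2f} days, supercapacitor: {supercap_days:.0f} days")
```

Under these assumptions a fully cycled battery would hit its rated life in under a day, while the supercapacitor lasts months, which is why the optimal setup mixes both rather than relying on either alone.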

He also noted that batteries and supercapacitors are both mature technologies, but the optimal setup—whether one, the other, or a combination with traditional capacitors—depends on storage size, number of racks, voltage levels, and overall system design.

Managing AI training bursts

Salge stressed the importance of complying with grid codes across geographies. “You need to become a good citizen to the power system,” he said. “You have to collaborate with local utilities to make sure that you are not infringing the grid codes and you are not disturbing with the data center back into the grid. A good way to do this, when renewables and data centers are co-located, is to manage renewable energy supply already inside the data center territory. Moreover, having a future-fit developed grid is a clear advantage. Because you have much more of these flexibility elements and the active elements to manage storage and renewable integration and to manage the dynamic loads of the data centers.”

If the grid is not future-fit with modern, actively operating equipment, operators will see significantly more stress. “With holistic planning, instead, you can even use some of the data center flexibility as a controllable and demand response kind of feature,” Salge said, adding that data center operators could coordinate AI training bursts to periods when the power system has more available capacity. This makes the data center a predictable, controllable demand, stressing the grid only when it is prepared.

“In conclusion, regarding technical feasibility: yes, it’s possible, but it requires the right configuration,” Salge said.

Economic feasibility

On economics, Salge believes solar and wind remain the cheapest power sources, even when accounting for the grid flexibility needed to integrate them with data centers. Solar is fastest to deploy, wind complements it well, and both can be scaled in parallel.

“Any increase in data center demand requires investment, whether from renewables or conventional power. Economics depend on the market, and market mechanisms, regulations, and technical grid planning are interconnected, influencing energy flow, pricing, and system stability,” he said.

“We recommend developers to work with all stakeholders—utilities, technology providers, and planners—from the start to ensure reliability, affordability, and social acceptance. Holistic planning avoids reactive fixes and leads to better long-term outcomes,” Salge concluded.

UNSW researchers identify new damp heat-induced failure mechanism in TOPCon solar modules

2 February 2026 at 07:32

UNSW researchers identified a new damp-heat degradation mechanism in TOPCon modules with laser-fired contacts, driven primarily by rear-side recombination and open-circuit voltage loss rather than series-resistance increase. The study highlights that magnesium in white EVA encapsulants accelerates degradation, guiding improved encapsulant and backsheet selection for more reliable modules in humid environments.

A research team from the University of New South Wales (UNSW) has identified a new damp heat-induced degradation pathway in TOPCon modules fabricated with laser-assisted fired contacts.

“Unlike earlier studies dominated by series-resistance increase, the primary degradation driver here is a reduction in open-circuit voltage, linked to enhanced rear-side recombination,” the research's lead author, Bram Hoex, told pv magazine. “The new degradation mechanism emerged under extended damp-heat (DH) exposure.”

The scientists conducted their analysis on 182 mm × 182 mm TOPCon cells fabricated in 2024 with laser-assisted firing.

The TOPCon solar cells employed a boron-doped p⁺ emitter, along with a front-side passivation stack consisting of unintentionally grown silicon dioxide (SiOₓ), aluminium oxide (Al₂O₃), and hydrogenated silicon nitride (SiNₓ:H), capped with a screen-printed H-pattern silver (Ag) contact grid. On the rear side, the structure comprised a SiO₂/phosphorus-doped n⁺ polycrystalline silicon/SiNₓ:H stack, also contacted by a screen-printed H-pattern Ag grid.

The researchers encapsulated the cells with different bills of materials (BOMs): two types of ethylene vinyl acetate (EVA); two types of polyolefin elastomer (POE); and one type of EVA-POE-EVA (EPE). They also used commercial coated polyethylene terephthalate (PET) composite (CPC) backsheets.

“The mini modules were laminated at 153 °C for 8 min under standard industrial lamination conditions,” the academics explained. “All modules underwent DH testing at 85 °C and 85% relative humidity (RH) in an ASLi climate chamber for up to 2,000 h to study humidity-induced failures.”

Schematic of the TOPCon solar cells and modules

Image: UNSW, Solar Energy Materials and Solar Cells, CC BY 4.0

The tests showed that maximum power losses ranged from 6% to 16%, with the variation depending strongly on the encapsulation BOM.

“The modules with POE on both sides were the most stable at around 8%, while those using white EVA on the rear side, especially in combination with EPE, showed the largest losses at around 16%,” said Hoex. “The primary driver of the degradation was a reduction in open-circuit voltage rather than the increased series resistance after DH testing, which diverges from previous findings that predominantly attributed DH-induced degradation to metallisation corrosion.”

The research team explained that higher levels of degradation were attributable to additives containing magnesium (Mg) in white EVA, which migrate under DH, hydrate, and create an alkaline micro-environment. “This alkaline chemistry corrodes the rear SiNx passivation layer, increases interfacial hydrogen concentration, induces local pinhole-like defects, and raises dark saturation current, ultimately reducing open-circuit voltage,” Hoex emphasized.

The scientists also explained that, although Mg in white EVA encapsulants and its role in acetic acid–induced degradation was previously reported, the effect of MgO on performance degradation in TOPCon modules was not explicitly studied.

Their findings are available in the paper “A novel damp heat-induced failure mechanism in PV modules (with case study in TOPCon),” published in Solar Energy Materials and Solar Cells.

“We hope this work helps refine encapsulant and BOM selection strategies for next-generation TOPCon modules, particularly for humid-climate deployment,” Hoex concluded. “It provides clear guidance for controlling Mg content in rear encapsulants and optimising rear-side passivation robustness. The mechanistic insights from this study have already informed upstream design changes, substantially reducing risk in commercial modules.”

Other research by UNSW showed the impact of POE encapsulants on TOPCon module corrosion, the effect of soldering flux on TOPCon solar cell performance, degradation mechanisms of industrial TOPCon solar modules encapsulated with ethylene vinyl acetate (EVA) under accelerated damp-heat conditions, as well as the vulnerability of TOPCon solar cells to contact corrosion and three types of TOPCon solar module failures that were never detected in PERC panels.

Furthermore, UNSW scientists investigated sodium-induced degradation of TOPCon solar cells under damp-heat exposure, the role of ‘hidden contaminants’ in the degradation of both TOPCon and heterojunction devices, and the impact of electron irradiation on PERC and TOPCon solar cell performance.

More recently, another UNSW research team developed an experimentally validated model linking UV-induced degradation in TOPCon solar cells to hydrogen transport, charge trapping, and permanent structural changes in the passivation stack.

Study finds much lower-than-expected degradation in 1980s and 1990s solar modules

30 January 2026 at 12:21

Researchers at SUPSI found that six Swiss PV systems installed in the late 1980s and early 1990s show exceptionally low degradation rates of just 0.16% to 0.24% per year after more than 30 years of operation. The study shows that thermal stress, ventilation, and material design play a greater role in long-term module reliability than altitude or irradiance alone.

A research group led by the University of Applied Sciences and Arts of Southern Switzerland (SUPSI) has carried out a long-term analysis of six south-facing, grid-connected PV systems installed in Switzerland in the late 1980s and early 1990s. The researchers found that the systems’ annual power loss rates averaged 0.16% to 0.24%, significantly lower than the 0.75% to 1% per year commonly reported in the literature.

The study examined four low-altitude rooftop systems located in Möhlin (310m-VR-AM55), Tiergarten East and West in Burgdorf (533m-VR-SM55(HO)), and Burgdorf Fink (552m-BA-SM55). These installations use ventilated or building-applied rooftop configurations. The analysis also included a mid-altitude utility-scale plant in Mont-Soleil (1270m-OR-SM55) and two high-altitude, facade-mounted systems in Birg (2677m-VF-AM55) and Jungfraujoch (3462m-VF-SM75).

All systems are equipped with either ARCO AM55 modules manufactured by US-based Arco Solar, which was the world’s largest PV manufacturer with just 1 MW capacity at the time, or Siemens SM55, SM55-HO, and SM75 modules. Siemens became Arco Solar’s largest shareholder in 1990. The modules have rated power outputs between 48 W and 55 W and consist of a glass front sheet, ethylene-vinyl acetate (EVA) encapsulant layers, monocrystalline silicon cells, and a polymer backsheet laminate.

The test setup included on-site monitoring of AC and DC power output, ambient and module temperatures, and plane-of-array irradiance measured using pyranometers. Based on site conditions, the researchers classified the installations into low-, mid-, and high-altitude climate zones.

“For benchmarking purposes, two Siemens SM55 modules have been stored in a controlled indoor environment at the Photovoltaic Laboratory of the Bern University of Applied Sciences since the start of the monitoring campaign,” the researchers said. They also applied the multi-annual year-on-year (multi-YoY) method to determine system-level performance loss rates (PLR).

The results show that PLRs across all systems range from -0.12% to -0.55% per year, with averages of -0.16% to -0.24% per year, well below typical degradation rates reported for both older and modern PV systems. The researchers also found that higher-altitude systems generally exhibit higher average performance ratios and lower degradation rates than comparable low-altitude installations, despite exposure to higher irradiance and ultraviolet radiation.
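One common formulation of the multi-annual year-on-year approach annualizes the relative change in performance ratio between every pair of observations and takes the median of all pairwise rates, which makes the estimate robust to outliers. A minimal Python sketch on synthetic data (not the SUPSI measurements; the ~0.2%/year decline is chosen for illustration):

```python
import numpy as np

def multi_yoy_plr(years, performance_ratio):
    """Estimate a performance loss rate (%/year) from annual performance
    ratios by annualizing the relative change between every pair of
    observations and taking the median of those rates."""
    rates = []
    for i in range(len(years)):
        for j in range(i + 1, len(years)):
            dt = years[j] - years[i]
            # relative change per year between the two observations
            rates.append((performance_ratio[j] / performance_ratio[i] - 1) / dt * 100)
    return float(np.median(rates))

# Synthetic 30-year series with a performance ratio declining ~0.2%/year
years = np.arange(1990, 2020)
pr = 0.80 * (1 - 0.002) ** (years - years[0])
print(round(multi_yoy_plr(years, pr), 2))  # → -0.2
```

The median over all year pairs, rather than a single linear fit, is what makes this family of methods tolerant of seasonal noise and isolated bad years in real monitoring data.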

The study further revealed that modules of the same nominal type but with different internal designs show markedly different degradation behaviour. Standard SM55 modules exhibited recurring solder bond failures, leading to increased series resistance and reduced fill factor. By contrast, SM55-HO modules benefited from a modified backsheet design that provides higher internal reflectance and improved long-term stability.

Overall, the findings indicate that long-term degradation in early-generation PV modules is driven primarily by thermal stress, ventilation conditions, and material design, rather than altitude or irradiance alone. Modules installed in cooler, better-ventilated environments demonstrated particularly stable performance over multiple decades.

The test results were presented in the paper “Three decades, three climates: environmental and material impacts on the long-term reliability of photovoltaic modules,” published in EES Solar.

“The study identified the bill-of-material (BOM) as the most critical factor influencing PV module longevity,” they concluded. “Despite all modules belonging to the same product family, variations in encapsulant quality, filler materials, and manufacturing processes resulted in significant differences in degradation rates. Early-generation encapsulants without UV stabilisation showed accelerated ageing, while later module designs with optimised backsheets and improved production quality demonstrated outstanding long-term stability.”

 

Dutch utility testing ‘silent’ residential heat pumps

30 January 2026 at 07:53

Dutch utility Eneco is testing low-noise air-to-water heat pumps from startup Whspr in around 20 homes, aiming to ease installation constraints near property boundaries. The systems reportedly achieve coefficients of performance of up to 5 and show up to 80% noise reduction in laboratory testing.

Dutch utility Eneco has begun testing an “innovative” type of air-to-water heat pump with low sound levels in residential buildings.

The company said conventional heat pumps rely on outdoor units that emit a constant hum, requiring installations several metres from property boundaries under Dutch building regulations and often forcing placement in prominent locations on terraced houses. By contrast, the “silent” heat pumps under test can be installed just 30 cm from the boundary.

“The pilot will provide insight into both ease of installation and real-world performance,” Eneco said in a statement. “The results will be used to further optimize the system, with the aim of making it widely available by the end of the summer.” The company added that around 20 homes are currently equipped with the systems to assess noise levels without “compromising residents’ everyday heating comfort.”

The heat pumps are supplied by Dutch startup Whspr. “Our 4 kW freestanding hybrid monoblock systems are designed for domestic space heating,” founder Hugo Huis in ’t Veld told pv magazine.

The unit measures 60 cm × 60 cm × 90 cm and weighs around 70 kg. “It is compact yet robust,” Huis in ’t Veld said, adding that initial measurements show efficiencies in line with the market, with coefficients of performance (COP) of between 4.5 and 5.0.
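For context, the coefficient of performance is simply useful heat delivered divided by electrical energy consumed. A quick check with illustrative numbers (the 0.85 kW electrical draw is an assumption for the sake of the example, not a published Whspr figure):

```python
def cop(heat_output_kw, electrical_input_kw):
    # Coefficient of performance: useful heat delivered per unit of
    # electrical energy consumed (both in kW, so the ratio is unitless)
    return heat_output_kw / electrical_input_kw

# Illustrative only: 4 kW thermal output at an assumed 0.85 kW
# electrical input lands in the reported COP range
print(round(cop(4.0, 0.85), 2))  # → 4.71
```

A COP between 4.5 and 5.0 thus means the unit delivers 4.5 to 5 units of heat per unit of electricity, which is in line with good current air-to-water systems.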

According to the manufacturer, the heat pump uses propane (R290) as its refrigerant and shows up to 80% noise reduction in laboratory testing.

Whspr also highlights ease of installation, stating that a single installer can fit and connect the unit, including the water side, in one day. A dedicated control and thermostat system has also been developed to reduce compatibility issues and simplify commissioning.

Further technical details have not yet been disclosed. “We are not at liberty to share designs at this stage, as patents are still pending,” Huis in ’t Veld said.

Eneco noted that pilot installations include both standard locations and more complex sites, such as rooftops and sheds at the end of gardens. The systems have also been installed in several rental homes owned by housing association Wooncompagnie. “Testing will continue until the end of April, after which the heat pumps will be further optimized,” the company said.


Scientists design low-cost sodium-ion battery with cheap electrode materials

23 January 2026 at 12:37

Conceived for stationary energy storage, the proposed sodium-ion battery configuration relies on a P2-type cathode material and a hard carbon anode material that reportedly ensure good full-cell performance. Electrochemical testing revealed initial capacities of 200 mAh/g for the cathode and 360 mAh/g for the anode, with capacity retentions of 42% and 67.4%, respectively, after 100 cycles.

An international research team has designed a sodium-ion battery (SIB) storage system based on a P2-type cathode material known as Na0.67Mn0.9Ni0.1O2 and an anode relying on a hard carbon material fabricated from lavender flowers.

The proposed system configuration is intended for low-cost fabrication while ensuring scalability and environmental sustainability, as the two electrode materials are described as “widely accessible” precursors.

“Plant diversity and production capacity are important factors affecting the commercialization of SIBs, as plant-derived hard carbons are both sustainable and economical,” the researchers explained. “Hard carbon derived from plants preserves the microstructures of the plant tissues, thereby enhancing the penetration of the electrolyte and sodium diffusivity.”

The scientists estimated global lavender production at approximately 1,000–1,500 tons annually. However, only a small fraction of this production can be used for electrode materials, as only the flower residue is suitable for conversion into hard carbon.

They also noted that the hard carbon anode and P2-type cathode in the full cell have insufficient sodium reservoirs, leading to poor electrochemical performance. “The present work addresses this gap by evaluating the full-cell performance of P2-Na0.67Mn0.9Ni0.1O2 coupled with lavender flower waste-derived hard carbon under different presodiation approaches,” they further explained.

The scientists used X-ray diffraction (XRD), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), Fourier-transform infrared spectroscopy (FTIR), and Raman spectroscopy to characterize the SIB system's cathode and anode. They found that the cathode has a hexagonal P63/mmc structure, while the anode showed the broad peaks characteristic of amorphous carbon.

SEM and transmission electron microscopy (TEM) revealed micrometer-sized cathode grains and a porous hard carbon surface, while energy-dispersive X-ray spectroscopy (EDS) and XPS indicated that the material has good structural stability. Further analysis also demonstrated that nickel (Ni) incorporation improved the cathode’s structural, electronic, and electrochemical performance.

Moreover, electrochemical testing revealed initial capacities of 200 mAh/g for the cathode and 360 mAh/g for the anode with capacity retentions of 42% and 67.4% after 100 cycles. Overall, Ni doping was found to improve the cathode’s conductivity and stability, and the anode demonstrated good sodium storage performance, supporting strong half-cell and potential full-cell performance, according to the researchers.
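The retention figures translate directly into the capacity left after cycling; a short check with the reported numbers (the post-cycling mAh/g values are derived here for illustration, not quoted from the paper):

```python
def retained_capacity(initial_mah_g, retention_pct):
    """Capacity remaining after cycling, from the initial capacity
    (mAh/g) and the retention expressed in percent."""
    return initial_mah_g * retention_pct / 100.0

# Reported figures: cathode 200 mAh/g at 42% retention,
# anode 360 mAh/g at 67.4% retention, both after 100 cycles
print(round(retained_capacity(200, 42.0), 1))  # → 84.0
print(round(retained_capacity(360, 67.4), 1))  # → 242.6
```

In other words, the cathode keeps roughly 84 mAh/g after 100 cycles while the anode keeps about 243 mAh/g, which is why the authors point to presodiation strategies as the lever for improving full-cell stability.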

“This comprehensive study highlights the potential for developing SIBs with low-cost and sustainable electrode materials,” they concluded. “The optimization of presodiation strategies offers an opportunity for advanced commercial and scalable SIB technologies.”

The system was described in the study “Cost-effective sodium-ion batteries using a Na0.67Mn0.9Ni0.1O2 cathode and lavender-flower-waste-derived hard carbon with a comparative presodiation approach,” published in the Journal of Power Sources. The research team comprised scientists from Turkey's Inonu University, Istanbul Technical University, Malatya Turgut Ozal University and Aksaray University, as well as from the Korea Institute of Science and Technology and Pakistan's Quaid-i-Azam University, among others.
