
6 April 2026

All emerging cyber threats targeting power infrastructure at a glance

Researchers in Morocco analyzed cybersecurity challenges in smart grids, highlighting AI-driven detection and defense strategies against threats such as distributed denial-of-service, false data injection, replay, and IoT-based attacks. They recommend multi-layered protections, real-time anomaly detection, secure IoT devices, and staff training to enhance resilience and safeguard power system operations.

Researchers at Morocco's Higher School of Technology, Moulay Ismail University, have conducted a comprehensive analysis of emerging cybersecurity challenges in power systems and detailed recent advances in detection and defense strategies.

Their work emphasizes the growing role of AI in enhancing control, protection, and resilience in modern smart grids. It also classifies cyber threats by origin, impact, and affected system layers to provide a structured understanding and reviews machine learning and optimization-based intrusion detection systems (IDSs) for power systems.

The researchers highlighted that renewable smart grids face diverse cyber threats that can disrupt operations and compromise data. Distributed denial-of-service (DDoS) attacks, for example, flood networks with traffic, blocking legitimate access and delaying control actions, while data integrity attacks manipulate sensor or control data, causing incorrect decisions or blackouts.

Additionally, replay attacks retransmit intercepted data to confuse the system, and false data injection attacks subtly alter real-time data to mimic normal operations while disrupting the grid. Covert attacks inject hidden signals that manipulate system behavior without detection, whereas IoT device-based attacks exploit vulnerabilities in meters or sensors to spread malware, steal data, or launch DoS attacks.

Finally, zero dynamics attacks leverage system models to generate hidden signals that leave output measurements unchanged but affect operations, posing sophisticated stealth threats to smart grid security.


The researchers warned that while smart grids have improved energy efficiency and flexibility through advanced communication tools and distributed energy sources, they have also introduced new cyber vulnerabilities. Threats such as phishing, malware, denial-of-service (DoS) attacks, and false data injection (FDI) can disrupt operations, compromise data, and damage infrastructure.

They recommend implementing defense strategies that maintain confidentiality, integrity, and availability, while also incorporating authentication, authorization, privacy, and reliability. Machine learning and data-driven intrusion detection systems can help identify anomalies and detect FDI attacks in real time, particularly in smart grids and industrial control systems such as SCADA, which rely on accurate sensor measurements for state estimation.
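To illustrate why FDI detection in state estimation is hard, the sketch below runs a classic residual (bad-data) test on a toy DC state-estimation model. The measurement matrix, injected values, and threshold are all assumptions for illustration, not values from the review.

```python
import math

# Toy residual test for false data injection (FDI) in DC state
# estimation. The 4x2 measurement matrix H and the threshold are
# illustrative assumptions, not taken from the paper.
H = [(1.0, 0.0), (0.0, 1.0), (1.0, -1.0), (1.0, 1.0)]

def residual_norm(z):
    # Least-squares state estimate; H is chosen so that H^T H = 3*I,
    # hence x_hat = (H^T z) / 3.
    x1 = sum(h[0] * zi for h, zi in zip(H, z)) / 3.0
    x2 = sum(h[1] * zi for h, zi in zip(H, z)) / 3.0
    return math.sqrt(sum((zi - (h[0] * x1 + h[1] * x2)) ** 2
                         for h, zi in zip(H, z)))

x_true = (1.0, 0.5)
z_clean = [h[0] * x_true[0] + h[1] * x_true[1] for h in H]
z_crude = [z + e for z, e in zip(z_clean, [0.0, 0.0, 0.4, 0.0])]
# A "stealthy" FDI adds H @ a, so the tampered data still fits the model:
a = (0.2, 0.0)
z_stealthy = [z + h[0] * a[0] + h[1] * a[1] for z, h in zip(z_clean, H)]

THRESH = 0.1  # assumed chi-square-style detection threshold
print(residual_norm(z_clean) < THRESH)     # True: clean data passes
print(residual_norm(z_crude) > THRESH)     # True: crude tampering is flagged
print(residual_norm(z_stealthy) < THRESH)  # True: stealthy FDI evades the test
```

The stealthy case shows why the researchers argue for data-driven detectors: an injection crafted to fit the measurement model passes the classic residual test unnoticed.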

The research team also encouraged energy asset owners and grid operators to adopt substation security measures and protocol vulnerability analyses to detect risks at the hardware and network levels. Blockchain, distributed ledgers, and Hilbert-Huang transform methods are highlighted as tools to further strengthen cybersecurity.

IoT devices, including sensors and smart meters, should be secured with strong authentication, safe boot procedures, frequent firmware updates, and standardized security across manufacturers. Sensitive grid data should be protected using techniques such as homomorphic encryption to maintain confidentiality during storage and transmission.
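To make the homomorphic-encryption recommendation concrete, the toy below implements the standard Paillier scheme with tiny primes (illustration only; production systems use 2048-bit moduli and vetted libraries). Paillier is additively homomorphic, so an aggregator can sum encrypted meter readings without seeing any individual value.

```python
import math
import random

# Toy Paillier cryptosystem: E(a) * E(b) mod n^2 decrypts to a + b.
# Tiny primes for readability only; never use such parameters in practice.
p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # private key (Carmichael function of n)
mu = pow(lam, -1, n)           # simplification valid for g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:   # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then multiply by mu mod n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = 42, 58
summed = decrypt(encrypt(a) * encrypt(b) % n2)
print(summed)  # 100: the sum was computed on ciphertexts alone
```

The aggregator multiplying ciphertexts never learns the individual readings 42 and 58, which is the confidentiality property the researchers recommend for grid data in storage and transit.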

“A multi-tiered security approach that includes firewalls, intrusion detection systems, and network segmentation can enhance grid resilience. Extracting critical elements from vulnerable IoT devices and leveraging redundant control channels ensures operational continuity during attacks,” the researchers stated.

Machine learning and anomaly detection systems should be deployed to enable real-time identification of irregular activities, including FDI and malware propagation. Standardized protocols and rapid incident response measures should also support collaboration among grid operators, IoT manufacturers, and regulators, facilitated by information-sharing platforms.
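The real-time anomaly-flagging idea can be sketched with a deliberately simple sliding-window detector. This is a stand-in for the ML-based systems the researchers describe; the window size and z-score threshold are assumed values.

```python
from collections import deque
import statistics

# Minimal sliding-window z-score detector for a sensor stream.
# Window size and threshold are illustrative assumptions.
class StreamAnomalyDetector:
    def __init__(self, window=20, z_thresh=4.0):
        self.window = deque(maxlen=window)
        self.z_thresh = z_thresh

    def observe(self, value):
        """Return True if `value` deviates sharply from recent history."""
        flagged = False
        if len(self.window) >= 5:  # wait for a short warm-up period
            mu = statistics.fmean(self.window)
            sigma = statistics.pstdev(self.window) or 1e-9
            flagged = abs(value - mu) / sigma > self.z_thresh
        self.window.append(value)
        return flagged

det = StreamAnomalyDetector()
# Steady readings around 50, then one injected spike at the end:
readings = [50.0 + 0.1 * (i % 3) for i in range(30)] + [95.0]
flags = [det.observe(r) for r in readings]
print(flags[-1])  # True: only the injected spike is flagged
```

Real deployments replace the z-score with learned models, but the structure is the same: score each incoming measurement against recent behavior and alert on outliers as they arrive.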

The researchers emphasize that human-centered attacks, including phishing and social engineering, remain significant threats, but these can be mitigated through regular staff and user training.

The review was presented in “Cybersecurity challenges and defense strategies for next-generation power systems,” published in Cyber-Physical Energy Systems.


New intrusion detection systems boost protection of SCADA systems against cyber threats

An international research team developed two deep learning-based IDS models to enhance cybersecurity in SCADA systems. The hybrid approach reportedly improves detection of complex and novel cyber threats with high accuracy, adaptability, and efficiency, outperforming traditional methods across multiple datasets.

A Saudi-British research team has developed two new deep learning-based intrusion detection systems (IDSs) that reportedly improve the cybersecurity of SCADA networks.

In large-scale solar power plants, SCADA systems play a vital role by overseeing energy generation, monitoring the performance of solar panels, optimizing output, identifying potential faults, and maintaining smooth overall operations. In essence, they act as the central system that converts raw solar data into practical control decisions, ensuring the plant operates safely, efficiently, and profitably.

The scientists explained that current cybersecurity frameworks are often inadequate for SCADA systems because they cannot fully cope with the complexity and constantly evolving nature of modern cyber threats. Most existing approaches rely on signature-based detection, which depends on prior knowledge of attack patterns and therefore fails to detect zero-day exploits or novel intrusion techniques.

To address this limitation, the researchers turned to deep learning methods, which can process large volumes of data, identify complex patterns, and enable more proactive threat detection.

“Such capability of handling and analyzing big data is particularly useful during scenarios when SCADA systems are generating huge streams of real-time data, including sensor readings, control commands, and other system logs,” they explained. “Furthermore, deep learning methods, especially convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have shown outstanding performances in the detection of complex attack scenarios with sequential or spatial patterns in data.”


The proposed approach integrates two new IDSs, named the Spike Encoding Adaptive Regulation Kernel (SPARK) and the Scented Alpine Descent (SAD) algorithm. By leveraging their complementary strengths, the method reportedly improves spike-threshold accuracy while enhancing adaptability and robustness under dynamic conditions.

The SPARK model introduces adaptive spike encoding by dynamically adjusting thresholds based on input signal characteristics. It uses advanced statistical methods to respond to variations in neural input, improving sensitivity to changes in intensity and frequency. By integrating both temporal and spatial features, SPARK enhances information encoding, especially for complex datasets. Unlike traditional fixed-threshold methods, it provides context-aware thresholding, improving accuracy and reliability.
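Assuming "context-aware thresholding" means a firing threshold that tracks the recent signal's mean and spread, the idea can be sketched as follows. This is an illustrative toy, not the authors' SPARK implementation; the window size and multiplier k are invented.

```python
import statistics

# Illustrative adaptive spike encoding: the firing threshold follows
# the running mean and spread of the recent signal, so only genuine
# departures from context produce spikes. Parameters are assumptions.
def adaptive_spike_encode(signal, window=8, k=2.0):
    spikes = []
    for i, x in enumerate(signal):
        recent = signal[max(0, i - window):i] or [x]
        mu = statistics.fmean(recent)
        sigma = statistics.pstdev(recent) if len(recent) > 1 else 0.0
        threshold = mu + k * sigma          # context-aware, not fixed
        spikes.append(1 if x > threshold else 0)
    return spikes

# A flat signal with one burst: only the burst crosses the threshold.
sig = [1.0] * 10 + [5.0] + [1.0] * 5
spikes = adaptive_spike_encode(sig)
print(spikes)
```

Note how the threshold rises after the burst enters the window, so the encoder does not keep firing on the return to baseline; a fixed threshold tuned to the flat segment would behave differently as the signal statistics drift.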

The SAD algorithm complements SPARK with an optimization strategy inspired by olfactory navigation (the process by which animals use odor cues to locate food, mates, or home) and by Lévy flight behavior (a random search strategy observed in many animal species seeking a target in an unknown environment). This purportedly enables efficient exploration of solution spaces and avoids local minima, ensuring optimal threshold selection.
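The Lévy flight ingredient can be sketched as a heavy-tailed random search over a one-dimensional threshold. This is a toy under assumed parameters and an assumed cost function, not the published SAD algorithm.

```python
import math
import random

# Toy Lévy-flight search for a scalar threshold minimizing a cost
# function; all parameters and the cost function are assumptions.
def levy_step(alpha=1.5):
    # Inverse-power sampling yields mostly small steps plus occasional
    # long jumps, which help the search escape local minima.
    u = random.random()
    return (u ** (-1.0 / alpha) - 1.0) * random.choice([-1.0, 1.0])

def levy_search(cost, x0=0.0, iters=4000, scale=0.1, lo=-5.0, hi=5.0):
    best_x, best_c = x0, cost(x0)
    x = x0
    for _ in range(iters):
        x = min(hi, max(lo, x + scale * levy_step()))
        c = cost(x)
        if c < best_c:
            best_x, best_c = x, c
        else:
            x = best_x  # greedy restart from the incumbent solution
    return best_x

# Multimodal toy cost with several local minima (assumed test function).
cost = lambda x: (x - 2.0) ** 2 + math.sin(5.0 * x)
random.seed(7)
best = levy_search(cost)
print(round(best, 2))
```

A pure small-step hill climber started at x = 0 gets trapped in the first local dip; the occasional long Lévy jumps are what let the search reach deeper basins, which is the behavior the authors credit with avoiding local minima during threshold selection.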

The hybrid approach can dynamically adjust and optimize spike thresholds simultaneously, surpassing conventional static or isolated approaches, according to the scientists, who noted that the SPARK model is well-suited for SCADA and IoT systems due to its scalability, real-time adaptability, and efficient data handling. Additionally, its lightweight design reduces computational overhead and false positives, making it effective for resource-constrained environments.

“SAD is complementary to SPARK in the sense that it focuses on improving the detection accuracy while maintaining computational efficiency,” the researchers emphasized. “SAD's anomaly scoring mechanism can be integrated into this framework to add another layer of detection, which can run parallel with SPARK. In effect, integrating the deep learning models into the scoring mechanism means that SAD would enable a much more fine-grained analysis of attack patterns with little noticeable impact on performance for the SCADA system in question.”

The researchers used multiple benchmark datasets to evaluate SCADA intrusion detection performance, including the Secure Water Treatment (SWaT) testbed, Gas Pipeline, WUSTL-IIoT, and Electra. These datasets capture diverse industrial environments, attack types, and operational conditions, enabling comprehensive testing. They also include time-series sensor data, actuator commands, and labeled attack scenarios such as denial-of-service (DoS), distributed denial-of-service (DDoS), malware, and injection attacks.

The diversity of datasets ensured accurate modeling of both normal behavior and complex anomalies in SCADA and IIoT systems, according to the research team. Standardized preprocessing, training, and evaluation procedures also enabled comparison across all tested models. Cross-validation and controlled training conditions, meanwhile, reportedly prevented bias and ensured reliable generalization results. Visualization tools such as histograms, loss curves, and confusion matrices provided insights into model behavior and anomaly detection.

The SPARK model was found to consistently demonstrate “superior” performance, achieving high accuracy, precision, and recall across datasets. It outperformed traditional machine learning and deep learning approaches in detecting diverse intrusion types.

“The findings underline, in summary, that the SPARK and SAD models are basically the final frontier in modern intrusion detection,” the scientists said. “Distinctly designed to provide improved detection capabilities and operational efficiency, the two designs also chart a way into more resilient and intelligent security solutions for modern industrial controlled systems (ICSs) and Internet-of-Things (IoT) networks.”

The novel IDSs have been presented in “SPARK and SAD: Leading-edge deep learning frameworks for robust and effective intrusion detection in SCADA systems,” published in the International Journal of Critical Infrastructure Protection. The research team comprised academics from Leeds Beckett University in the United Kingdom and King Abdulaziz University in Saudi Arabia.


Human Error in Cybersecurity and the Growing Threat to Data Centers

19 January 2026 at 17:00

Cyber incidents continued to escalate in 2025, and their impact is being felt ever more strongly as we transition into 2026. The rapid evolution of novel cyber threats leaves data centers increasingly exposed to disruptions extending beyond traditional IT boundaries.

The Uptime Institute’s annual outage analysis shows that in 2024, cyber-related disruptions occurred at roughly twice the average rate seen over the previous four years. This trend aligns with findings from Honeywell’s 2025 Cyber Threat Report, which identified a sharp increase in ransomware and extortion activity targeting operational technology environments based on large-scale system data.

There is much discussion today about infrastructure complexity and attack sophistication, but it is a lesser-known reality that human error in cybersecurity remains a central factor behind many of these incidents. Routine configuration changes, access decisions, and choices made under stress can all create conditions in which errors slip in. In high-availability environments, human error often becomes the point at which otherwise contained threats begin to escalate into bigger problems.

As cyberattacks on data centers continue to grow in number, downtime is carrying heavier and heavier financial and reputational consequences. Addressing human error in cybersecurity means recognizing that human behavior plays a direct role in how a security architecture performs in practice. Let’s take a closer look.

How Attackers Take Advantage of Human Error in Cybersecurity

Cyberattacks often exploit vulnerabilities that stem from both superficial, even preventable, mistakes and deeper, systemic issues. Human error in cybersecurity often arises when established procedures are not followed consistently, creating gaps that attackers are eager to exploit. A delayed firmware update or an incomplete maintenance task can leave infrastructure exposed, even when the risks are already known. And even when organizations have defined policies to reduce these exposures, noncompliance or insufficient follow-through often weakens their effectiveness.

In many environments, operators are aware that parts of their IT and operational technology infrastructure carry known weaknesses, but due to a lack of time or oversight, they fail to address them consistently. Limited training also adds to the problem, especially when employees are expected to recognize and respond to social engineering techniques. Phishing, impersonation, and ransomware attacks are increasingly targeting organizations with complex supply chains and third-party dependencies, and in these situations, human error often enables the initial breach, after which attackers move laterally through systems, using minor mistakes to trigger disruptions.

Why Following Procedures Is Crucial

Having policies in place doesn’t always guarantee consistent follow-through. In everyday operations, teams often juggle many tasks at once (updates, alerts, routine maintenance), and small steps can be missed unintentionally. Even experienced staff make these kinds of mistakes, especially when managing large or complex environments over an extended period. Gradually, these small oversights add up and leave systems exposed.

Account management works similarly. Password rules and policies for handling inactive accounts are usually well-defined; however, they are not always applied uniformly. Dormant accounts may go unnoticed, fall behind on updates, or escape regular review. Human error in cybersecurity often develops step by step through workload, familiarity, and everyday stress, not because of a lack of skill or awareness.

The Danger of Interacting With Social Engineering Without Realizing It

Social engineering is a method of attack that uses deception and impersonation to influence people into revealing information or providing access. It relies on trust and context to make people perform actions that appear harmless and legitimate at the moment.

These attacks succeed because they mirror everyday communication very accurately. Attackers today have all the tools to impersonate colleagues, service providers, or internal support staff. A phone call from someone claiming to be part of the IT help desk can easily seem routine, especially when framed as a quick fix or standard check. Similar approaches appear in emails and on messaging platforms, and the pattern is the same: urgency overrides safety.

With the various new tools available, visual deception has become very common. Employees may be directed to login pages that closely resemble internal systems and enter credentials without hesitation. Emerging techniques like AI-assisted voice or video impersonation further blur the line between legitimate requests and malicious activity, making social engineering interactions very difficult to recognize in real time.

Ignoring Security Policies and Best Practices

Security policies offer little protection if they exist only as formal documentation and are not followed consistently on the floor. Even when access procedures are defined, employees under time pressure can make undocumented exceptions. Change management rules, for example, may require peer review and approval, but urgent maintenance or capacity pressures often lead to decisions that bypass those steps.

These small deviations create gaps between how systems are supposed to be protected and how they are actually handled. When policies become situational or optional, security controls lose their purpose and reliability, leaving the infrastructure exposed, even though there’s a mature security framework in place.

When Policies Leave Room for Interpretation

Policies that lack precision introduce variability into how security controls are applied across teams and shifts. When procedures don’t explicitly define how credentials should be managed on shared systems, retained login sessions or administrative access can remain in place beyond their intended scope. Similarly, if requirements for password rotation or periodic access reviews are loosely framed or undocumented, they are more likely to be deferred during routine operations.

These conditions rarely trigger immediate alerts or audit findings. However, over time, they accumulate into systemic weaknesses that expand the attack surface and increase the likelihood of attacks.

Best Practices That Erode in Daily Operations

Security issues often emerge through slow, incremental changes. When operational pressure increases, teams tend to rely on informal workarounds to keep everything running. Routine best practices such as updates, access reviews, and configuration standards can slip down the priority list or become sloppy in their application. Individually, these decisions can seem reasonable in the moment; over time, however, they add up and dilute established safeguards, leaving the organization exposed even without a single clearly identifiable incident.

Overlooking Access and Offboarding Control

Ignoring best practices around access management introduces another layer of risk. Employees and third-party contractors often retain privileges beyond their active role if offboarding steps are not followed through. In the absence of clear deprovisioning rules, such as disabling accounts, dormant access can linger unnoticed. These inactive accounts are rarely monitored closely enough to detect misuse or compromise when it happens.

Policy Gaps During Incident Response

The consequences of ignoring procedures become most visible when an actual cybersecurity incident occurs. When teams are forced to act quickly without clear guidance, errors start to surface. Procedures that are outdated, untested, or difficult to locate offer little support during an emergency. No policy can eliminate risk completely; however, organizations that treat procedures as living, enforceable tools are better positioned to respond effectively when an incident occurs.

A Weak Approach to Security Governance

Weak security governance often allows risks to persist unnoticed, especially when oversight from management is limited or unclear. Without clear ownership and accountability, routine tasks like applying security patches or reviewing alerts can be delayed or overlooked, leaving systems exposed. These seemingly insignificant gaps create an environment over time in which vulnerabilities are known but not actively addressed.

Training plays a very important role in closing this gap, but only when it is treated as part of governance, not as an isolated activity. Regular, structured training helps employees develop a habit of verification and reinforces the checks and balances defined by organizational policies. To remain effective, training has to evolve in tandem with the threat landscape. Employees need ongoing exposure to emerging attack techniques and practical guidance on how to recognize and respond to them within their daily workflows. Aligned governance and training help organizations position themselves better to reduce risk driven by human factors.

Understanding the Stakes

Human error in cybersecurity is often discussed as a collection of isolated missteps, but in reality, it reflects how people operate within complex systems under constant pressure.

In data center environments, these errors rarely occur as isolated events but are influenced by interconnected processes, tight timelines, and attackers who deliberately exploit trust, familiarity, and routine behavior. Looking at it from this angle, human error doesn’t show only individual mistakes but provides insight into how risks develop across an organization over time.

Recognizing the role of human error in cybersecurity is essential for reducing future incidents, but awareness alone is not enough. Training also plays an important role, but it cannot compensate for unclear processes, weak governance, or a culture that prioritizes speed over safety.

Data center operators have to continuously adapt their security practices and reinforce expectations through daily operations instead of treating security best practices as rigid formalities. Building a culture where employees understand how their actions influence security outcomes helps organizations respond more effectively to evolving threats and limits the conditions that allow small errors to turn into major, devastating incidents.

# # #

About the Author

Michael Zrihen is the Senior Director of Marketing & Internal Operations Manager at Volico Data Centers.

The post Human Error in Cybersecurity and the Growing Threat to Data Centers appeared first on Data Center POST.
