As the Trump administration doubles down on its energy and AI dominance agenda, U.S. energy companies have found themselves navigating tricky communication strategies. Touting the clean, carbon-free nature of renewable energy no longer carries the clout it did under the Biden administration, and policy has shifted against certain forms of renewables. At the same time, energy companies are being called upon to meet rising power demands of data-center developers, many of which are prioritizing carbon-free options.
This has forced energy companies to shift the way they communicate: They must garner political favor while also positioning themselves as an answer to the coming onslaught of electricity demand.
The wind and solar industries are focusing on electricity affordability and the fact that wind farms and photovoltaics are the cheapest and fastest way to add new energy generation. Battery storage developers are aligning themselves with Trump’s domestic manufacturing push, scaling up efforts to shift supply chains to the United States as they battle uncertainty over tariffs.
Nuclear power companies are touting their ability to go small and modular—theoretically a faster way to get reactors running. Next-generation geothermal developers are staying the course but playing up the industry’s crossovers with oil and gas. Hydrogen, too, is being highlighted as similar to fossil fuels. And the offshore wind industry is mostly preoccupied with using the courts to fight the Trump administration’s repeated attempts to ban development.
It’s not that the renewable technologies themselves have changed, says Samuel Furfari, former European Commission senior energy official and current energy geopolitics professor at ESCP Business School in London. “Mr. Trump has made a communication revolution, not an energy revolution,” he says about the state of the industry in the United States and abroad.
Trump Declares His Energy Darlings
Trump’s affinity for fossil fuels and his disdain for certain renewables, such as wind, have constructed a new federal hierarchy of energy sources. On day one of his second term as U.S. president, Trump issued an executive order listing which energy resources his country should promote. The list mentions fossil fuels, geothermal, and nuclear but excludes solar, wind, and hydrogen.
Then, in July, the One Big Beautiful Bill Act slashed renewable energy incentives for wind and solar while extending the tax credits for geothermal through 2033. On 1 December, Trump’s Department of Energy renamed the National Renewable Energy Laboratory the National Laboratory of the Rockies—a moniker that drops the word “renewable” and, the department says, reflects the lab’s “expanding mission” under Trump. And in an eleventh-hour move, the Department of the Interior at the end of 2025 halted all offshore wind projects under construction, citing national security risks.
At first, the wind and solar industries attempted to fit into the Trump administration’s agenda by leaning into his energy dominance rhetoric, says clean energy consultant Lloyd Ritter in Washington, D.C. But after the government gutted tax incentives for wind and solar, and concerns over high electricity bills became a top election issue, industry players prioritized messaging about affordability for consumers, Ritter says.
“Electricity costs are now a thing in politics, and I don’t think that’s going to change anytime soon,” Ritter says. The cost concerns stem from projections that electricity use in the United States will increase 32 percent by 2030, mostly because of data centers, according to the latest forecast from Grid Strategies.
The solar and storage industries are welcoming these demand projections. That’s because solar is still the “fastest and cheapest form of electrons to get onto the grid,” says Raina Hornaday, cofounder of Austin, Texas–based Caprock Renewables, a solar and storage developer. In her view, meeting the load demands of data centers is going to take care of the political backlash that solar and storage have endured under the Trump administration.
Hornaday sees a particular opening for batteries. “The R&D for battery storage is really the winner across the board, and we don’t consider battery storage renewable. It can utilize renewable energy electrons, but it doesn’t have to,” she says. “It can be power from the grid.”
Sage Geosystems harvests heat from underground water reservoirs. The company has recently shifted from talking about geothermal energy as clean to its ability to get electricity to the grid faster to accommodate data-center growth. Sage Geosystems
Geothermal Inherits Fortuitous Position
The communications framing for next-generation geothermal power has shifted too, despite it being a political favorite. Companies in this sector say they are continuing to emphasize geothermal as a baseload power source—something that can crank out electricity 24/7, like fossil fuels can. But projected increases in power demand have shifted other elements of the conversation.
The leading communication strategies now are less about geothermal’s carbon-free benefits and more about getting energy to the grid faster to address data-center growth, says Cindy Taff, CEO of Houston-based startup Sage Geosystems. Geothermal companies are also talking about how their use of drilling technology, know-how, and other synergies borrowed from the oil and gas industries can fast-track development.
“When we first started Sage four and a half years ago, we were talking about it being clean and renewable, but if you think about it, there’s now a little bit more allergic connotation with clean and renewable,” says Taff, who spent more than 35 years in well construction and project management at Shell before founding Sage.
Lessening the use of climate-focused language is something “the whole industry” is doing, adds Geoffrey Garrison, vice president of operations at Quaise Energy, headquartered in Houston. “I think you have to be cognizant of who’s listening and who has got their hands on the lever.… You tailor your message,” he says.
Other Trump administration priorities, like moving industry and manufacturing back to U.S. soil, are top of mind for geothermal companies, says Sarah Jewett, senior vice president of strategy at Fervo Energy, also in Houston. “We are thinking a lot more about localization of [the] supply chain, in large part due to this administration’s focus,” Jewett says.
In its pitches to investors, Fervo Energy includes talking points about how geothermal energy drilling uses technology from the oil and gas industry. Fervo Energy
Overall, Fervo’s messaging has remained “pretty consistent” between U.S. presidential administrations, Jewett says. In its pitch to investors, Fervo includes talking points about how next-generation geothermal uses drilling technology from the oil and gas industry. But clean energy isn’t completely missing from Fervo’s communications. “Some sides of the aisle like parts of it, and other parts of the aisle like other parts of it,” Jewett says.
It’s not just U.S. companies that are shifting the message. In November at ADIPEC, the world’s largest annual energy conference, held in Abu Dhabi, widely adopted buzzwords such as “energy transition”—a term referring to the shift away from fossil fuels—were being swapped out for “energy addition.”
That’s not solely a result of shifting political tides. The surge in energy demand may indeed necessitate more of an addition, rather than a complete transition. It’s a reasonable shift, given the “hockey stick” demand increase the industry is facing, says Taff at Sage. “Energy transition was, in my opinion, when [demand] uptick was very steady. But now that you’ve got the hockey stick, the use of ‘addition’…is much more applicable,” she says.
Abroad, Trump’s impact reverberates, Furfari says. “We were shy to mention fossil fuel. Mr. Trump does not care, and says, ‘No, we need fossil fuel.’ This is changing the world.”
More than 97 percent of the new cars Norwegians registered in November 2025 were electric, almost reaching the country’s goal of 100 percent. As a result, the government has begun removing some of the many carrots it used to encourage its successful EV transition. Cecilie Knibe Kroglund, state secretary in the country’s Ministry of Transport, reveals some of the challenges that come with success.
What were the important early steps to promote the EV switch?
Kroglund: Battery-electric vehicles have had exemptions from the 25 percent value-added tax and from the CO2- and weight-based registration tax that apply to combustion-engine vehicles. We used other tax incentives to encourage building charging stations on highways and in rural areas. Cities had the opportunity to exempt zero-emissions cars from toll roads. EV drivers also got reduced ferry fares, free parking, and access to bus lanes in many cities. The technology for the vehicles wasn’t that good at the start of the incentives program, but we had the taxes and incentives to make traditional passenger cars more expensive.
What were the biggest barriers, and how did policymakers overcome them?
Kroglund: Early on the technology was challenging. In summertime it was easy to fuel the EV, but in wintertime it’s double the use of energy. But the technology has improved a lot in the last five years.
The Norwegian tax exemptions on EVs were introduced before EVs came to market and were decisive in offsetting the early disadvantages of EVs compared to conventional cars, especially regarding comfort, vehicle size, and range. The rapid expansion of charging infrastructure along major corridors has also been important to overcome range anxiety.
How have private companies responded to government incentives?
Kroglund: I’m personally surprised that it went so well. This was a long-term commitment from the government, and the market has responded to that. Many Norwegian companies use EVs. The market for charging infrastructure is considered commercially viable and no longer needs financial support. However, we don’t see commercial-vehicle adoption going as fast as passenger vehicles, and we had the same goal. So we will have to review the goals, and we’ll have to review the incentives.
What unexpected new problems is Norway’s success creating?
Kroglund: The success of the passenger-vehicle policies means EVs are in competition with public transport in the larger cities. Driving an EV remains much cheaper than driving a conventional car even without tax exemptions, and overall car use continues to rise. National, regional, and local governments must find different tools to promote walking, bicycling, and public transport because each city and region is different.
How applicable are these lessons to poorer or less well-administered countries and why?
Kroglund: We are different as countries. The geographies are different, and some countries have single cities with bigger populations than our entire country. This is not a policy for L.A., but what we see in Norway is that incentives work. However, tax incentives are only applicable in systems where effective taxation is established, which may not be the case in poorer countries. Other benefits, such as lower local emissions, only apply in places with lots of traffic.
The Norwegian experience shows that the economic incentives work, but it also shows that EVs work even in a country with cold weather.
This article appears in the February 2026 print issue as “Cecilie Knibe Kroglund.”
In 2018, Justin Kropp was working on a transmission circuit in Southern California when disaster struck. Grid operators had earlier shut down the 115-kilovolt circuit, but six high-voltage lines that shared the corridor were still operating, and some of their power snuck onto the deenergized wires he was working on. That rogue current shot to the ground through Kropp’s body and his elevated work platform, killing the 32-year-old father of two.
“It went in both of his hands and came out his stomach, where he was leaning against the platform rail,” says Justin’s father, Barry Kropp, who is himself a retired line worker. “Justin got hung up on the wire. When they finally got him on the ground, it was too late.”
Budapest-based Electrostatics makes conductive suits that protect line workers from unexpected current. Electrostatics
Justin’s accident was caused by induction: a hazard that occurs when an electric or magnetic field causes current to flow through equipment whose intended power supply has been cut off. Safety practices seek to prevent such induction shocks by grounding all conductive objects in a work zone, giving electricity alternative paths. But accidents happen. In Justin’s case, his platform unexpectedly swung into the line before it could be grounded.
Conductive Suits Protect Line Workers
Adding a layer of defense against induction injuries is the motivation behind Budapest-based Electrostatics’ specialized conductive jumpsuits, which are designed to protect against burns, cardiac fibrillation, and other ills. “If my boy had been wearing one, I know he’d be alive today,” says the elder Kropp, who purchased a line-worker safety training business after Justin’s death. The Mesa, Ariz.–based company, Electrical Safety Consulting International (ESCI), now distributes those suits.
Conductive socks that are connected to the trousers complete the protective suit. BME HVL
Eduardo Ramirez Bettoni, one of the developers of the suits, dug into induction risk after a series of major accidents in the United States in 2017 and 2018, including Justin Kropp’s. At the time, he was principal engineer for transmission and substation standards at Minneapolis-based Xcel Energy. In talking to Xcel line workers and fellow safety engineers, he sensed that the accident cluster might be the tip of an iceberg. And when he and two industry colleagues scoured data from the U.S. Bureau of Labor Statistics, they found 81 induction accidents between 1985 and 2021, including 60 deaths, which they documented in a 2022 report.
“Unfortunately, it is really common. I would say there are hundreds of induction contacts every year in the United States alone,” says Ramirez Bettoni, who is now technical director of R&D for the Houston-based power-distribution equipment firm Powell Industries. He bets that such “contacts”—exposures to dangerous levels of induction—are increasing as grid operators boost grid capacity by squeezing additional circuits into transmission corridors.
Electrostatics’ suits are an enhancement of the standard protective gear that line workers wear when their tasks involve working close to or even touching energized lines, known as “bare-hands” work. Both kinds of gear are interwoven with conductive materials such as stainless-steel threads, which form a Faraday cage that shields the wearer from the lines’ electric fields. But the standard suits have limited capacity to shunt current because they usually don’t need to. Like a bird on a wire, bare-hands workers are electrically floating, rather than grounded, so current largely bypasses them via the line itself.
Induction Safety Suit Design
Backed by a US $250,000 investment from Xcel in 2019, Electrostatics adapted its standard suits by adding low-resistance conductive straps that pass current around a worker’s body. “When I’m touching a conductor with one hand and the other hand is grounded, the current will flow through the straps to get out,” says Bálint Németh, Electrostatics’ CEO and director of the High Voltage Laboratory at Budapest University of Technology and Economics.
A strapping system links all the elements of the suit—the jacket, trousers, gloves, and socks—and guides current through a controlled path outside the body. BME HVL
The company began selling the suits in 2023, and they have since been adopted by over a dozen transmission operators in the United States and Europe, as well as other countries including Canada, Indonesia, and Turkey. They cost about $5,200 in the United States.
Electrostatics’ suits had to meet a crucial design requirement: keeping body current below the 6-milliampere “let-go” threshold, beyond which shocked workers become unable to remove themselves from a circuit. “If you lose control of your muscles, you’re going to hold onto the conductor until you pass out or possibly die,” says Ramirez Bettoni.
The gear, which includes the suit, gloves, and socks, protects against 100 amperes for 10 seconds and 50 A for 30 seconds. It also has insulation to protect against heat created by high current and flame retardants to protect against electric arcs.
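To see how a low-resistance strap path can keep body current under the let-go limit, consider a simple current-divider sketch. All resistance values here are illustrative assumptions for the sake of the example, not Electrostatics specifications:

```python
# Current-divider estimate of how conductive straps protect the wearer.
# Induced current splits between two parallel paths -- the body and the
# strap -- in inverse proportion to their resistances. The resistance
# values below are hypothetical, chosen only to illustrate the principle.

BODY_RESISTANCE_OHMS = 1000.0   # rough hand-to-hand body resistance
STRAP_RESISTANCE_OHMS = 0.1     # assumed low-resistance strap path
LET_GO_LIMIT_A = 0.006          # the 6-milliampere let-go threshold

def body_current(total_current_a: float) -> float:
    """Current through the body when total_current_a flows through
    the parallel body/strap combination."""
    share = STRAP_RESISTANCE_OHMS / (STRAP_RESISTANCE_OHMS + BODY_RESISTANCE_OHMS)
    return total_current_a * share

# A 50-ampere contact, within the suit's 30-second rating:
i_body = body_current(50.0)
print(f"{i_body * 1000:.1f} mA through the body")  # ~5.0 mA, under the limit
```

Under these assumed values, the straps carry all but about 5 milliamperes of a 50-ampere contact; the real suits’ margins depend on actual strap and skin-contact resistances.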
Kropp, Németh, and Ramirez Bettoni are hoping that emerging industry standards for induction safety gear, including ones published in October, will broaden its adoption. Meanwhile, the recently enacted Justin Kropp Safety Act in California, for which the elder Kropp lobbied, mandates automated defibrillators at power-line work sites.
On a blustery November day, a Cessna turboprop flew over Pennsylvania at 5,000 meters, in crosswinds of up to 70 knots—nearly as fast as the little plane was flying. But the bumpy conditions didn’t thwart its mission: to wirelessly beam power down to receivers on the ground as it flew by.
The test flight marked the first time power has been beamed from a moving aircraft. It was conducted by the Ashburn, Va.-based startup Overview Energy, which emerged from stealth mode in December by announcing the feat.
But the greater purpose of the flight was to demonstrate the feasibility of a much grander ambition: to beam power from space to Earth. Overview plans to launch satellites into geosynchronous orbit (GEO) to collect unfiltered solar energy where the sun never sets and then beam this abundance back to humanity. The solar energy would be transferred as near-infrared waves and received by existing solar panels on the ground.
The far-flung strategy, known as space-based solar power, has become the subject of both daydreaming and serious research over the past decade. Caltech’s Space Solar Power Project launched a demonstration mission in 2023 that transferred power in space using microwaves. And terrestrial power beaming is coming along too. The U.S. Defense Advanced Research Projects Agency (DARPA) in July 2025 set a new record for wirelessly transmitting power: 800 watts over 8.6 kilometers for 30 seconds using a laser beam.
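For scale, the total energy delivered in that DARPA demonstration follows from simple arithmetic on the figures cited above:

```python
# Arithmetic on the DARPA record: 800 W sustained for 30 s over 8.6 km.
power_w = 800
duration_s = 30
distance_km = 8.6

energy_j = power_w * duration_s   # joules delivered
energy_wh = energy_j / 3600       # convert to watt-hours
print(f"{energy_j} J (~{energy_wh:.1f} Wh) over {distance_km} km")
# 24000 J, about 6.7 Wh -- a modest amount of energy, but a record distance
```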
But until November, no one had actively beamed power from a moving platform to a ground receiver.
Wireless Power Beaming Goes Airborne
Overview’s test transferred only a sprinkling of power, but it did so with the same components and techniques that the company plans to send to space. “Not only is it the first optical power beaming from a moving platform at any substantial range or power,” says Overview CEO Marc Berte, “but also it’s the first time anyone’s really done a power beaming thing where it’s all of the functional pieces all working together. It’s the same methodology and function that we will take to space and scale up in the long term.”
The approach was compelling enough that power-beaming expert Paul Jaffe left his job as a program manager at DARPA to join the company as head of systems engineering. Prior to DARPA, Jaffe spent three decades with the U.S. Naval Research Laboratory.
“This actually sounds like it could work.” –Paul Jaffe
It was hearing Berte explain Overview’s plan at a conference that helped to convince Jaffe to take a chance on the startup. “This actually sounds like it could work,” Jaffe remembers thinking at the time. “It really seems like it gets around a lot of the showstoppers for a lot of the other concepts. I remember coming home and telling my wife that I almost felt like the problem had been solved. So I thought: Should [I] do something which is almost unheard of—to leave in the middle of being a DARPA program manager—to try to do something else?”
For Jaffe, the most compelling reason was Overview’s solution to space-based solar’s power-density problem. A beam with low power density is safer because it’s not blasting too much concentrated energy onto a single spot on the Earth’s surface, but it’s less efficient for the task of delivering usable solar energy. A higher-density beam does the job better, but then the researchers must engineer some way to maintain safety.
Startup Overview Energy demonstrates how space-based solar power could be beamed to Earth from satellites. Overview Energy
Space-Based Solar Power Makes Waves
Many researchers have settled on microwaves as their beam of choice for wireless power. But, in addition to the safety concerns about shooting such intense waves at the Earth, Jaffe says there’s another problem: Microwaves are part of what he calls the “beachfront property” of the electromagnetic spectrum—a range from 2 to 20 gigahertz that is set aside for many other applications, such as 5G cellular networks.
“The fact is,” Jaffe says, “if you somehow magically had a fully operational solar power satellite that used microwave power transmission in orbit today—and a multi-kilometer-scale microwave power satellite receiver on the ground magically in place today—you could not turn it on because the spectrum is not allocated to do this kind of transmission.”
Instead, Overview plans to use less-dense, wide-field infrared waves. Existing utility-scale solar farms would be able to receive the beamed energy just like they receive the sun’s energy during daylight hours. So “your receivers are already built,” Berte says. The next major step is a prototype demonstrator for low Earth orbit, after which he hopes to have GEO satellites beaming megawatts of power by 2030 and gigawatts later that decade.
Doubts about the feasibility of space-based solar power abound. It is an exotic technology with much left to prove, from surviving orbital debris to overcoming the exorbitant cost of launching the power stations. (Overview’s satellite will be built on Earth in a folded configuration, and it will unfold after it’s brought to orbit, according to the company.)
“Getting down the cost per unit mass for launch is a big deal,” Jaffe says. “Then, it just becomes a question of increasing the specific power. A lot of the technologies we’re working on at Overview are squarely focused on that.”
Ask the average driver what they want from a car, and it isn’t 0-to-60-mile-per-hour times or Nürburgring lap records. It’s something quiet, comfortable, reliable, and inexpensive to run. On all those fronts, the electric vehicle (EV) already offers a better experience than a gasoline car. EVs are more responsive, easier to maintain, and aligned with everyone’s idea of a sustainable future. After all, no one pictures a futuristic city cloaked in exhaust fumes.
Yet mass adoption isn’t driven by enthusiasts—it’s driven by the everyday buyer. And for that buyer, EVs remain too costly. Global EV sales passed roughly 20 percent of new cars in 2024, according to the International Energy Agency, but the inflection point for true mass adoption still lies ahead.

Some major Western automakers are signaling caution: GM, for example, paused production of the Cadillac Lyriq and Vistiq in December and will run only a single shift at its Spring Hill, Tenn., plant through early 2026—an acknowledgment of softer near-term U.S. demand and rising costs. Meanwhile, global BEV growth is being pulled forward by China. If demand worldwide is rising while Western manufacturers slow production, the industry may be entering a major shake-out. Automakers cannot sustain a multiyear cost disadvantage against Chinese competitors, and only the handful that close that gap will emerge as long-term winners. Closing it ultimately comes down to building far cheaper batteries.

To reach true mass-market penetration, EVs must match internal-combustion cars on both range and affordability—roughly 400 miles for US $20,000 to $25,000. That’s a tall order, because batteries make up about 40 percent of an EV’s cost, and the cells themselves dominate that figure. BloombergNEF’s most recent battery-price survey found that cell manufacturing is now the single biggest determinant of whether a vehicle can be profitably priced for the mass market.
Where the cost lies
About 70 percent of an EV battery cell’s cost comes from materials—the cathode and anode active materials, separators, and current collectors—and 30 percent from manufacturing, according to data from Thunder Said Energy, an Austin, Texas–based consultancy focused on energy technologies. Progress on both fronts is vital. Chemistries such as lithium-iron-phosphate (LFP) and nickel-manganese-cobalt (NMC) are steadily improving in cost and performance, and researchers are exploring cheaper current-collector materials and boosting energy density with low-cost silicon-doped anodes. But even as materials evolve, the way we build cells has changed remarkably little.
Today’s “wet-coating” process still resembles how it was done decades ago: active powders mixed with toxic solvents, spread as slurries onto metal foil, and dried in industrial ovens the length of a football field. A 50-gigawatt-hour cell factory—enough for about a million EVs per year—can require 50 megawatts of continuous power just for those ovens, according to a 2022 study in the Journal of Power Sources. That’s equivalent to the electricity demand of roughly 40,000 homes, the U.S. Energy Information Administration notes. The environmental and capital costs are enormous.
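The homes comparison checks out with simple arithmetic, assuming the EIA’s figure of roughly 10,800 kilowatt-hours of annual electricity consumption per average U.S. home:

```python
# Sanity-checking the "roughly 40,000 homes" comparison above.
# Assumes the EIA's average U.S. residential consumption of about
# 10,800 kWh per home per year.
oven_power_w = 50e6              # 50 MW of continuous oven load
kwh_per_home_per_year = 10_800
hours_per_year = 8760

avg_home_demand_w = kwh_per_home_per_year * 1000 / hours_per_year  # ~1,233 W
homes_equivalent = oven_power_w / avg_home_demand_w
print(f"{homes_equivalent:,.0f} homes")  # ~40,600 homes, matching ~40,000
```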
Rethinking the factory floor
That’s why the industry’s attention is turning toward dry electrode manufacturing. In principle, eliminating solvents from electrode coating could cut both energy use and cost, while shrinking factory footprints. But getting dry coating to work at scale has proven extremely difficult. Without liquids, it’s hard to mix and spread the fine powders evenly, maintain strong adhesion, and avoid damaging the materials through heat and friction.
At Anaphite, my company (which is located in Bristol, England), we’ve spent nearly five years developing an alternative we call our Dry Coating Precursor (DCP) technology. We start with low-toxicity solvents to disperse materials uniformly, then remove the solvent mechanically before dry coating. The resulting film-forming powder behaves almost like kinetic sand: granular when loose, cohesive under pressure. During manufacturing, it transforms into a smooth, flexible electrode layer that bonds tightly to its current collector.
The payoff is dramatic—an 85 percent reduction in coating-process energy use, up to 40 percent lower cell-production cost, and a 15 percent smaller factory footprint, all without compromising yield or performance. These savings compound rapidly: Percentage points shaved from cell cost can determine whether a vehicle remains niche or achieves true mass-market pricing.
A member of Anaphite’s Cells and Electrodes team prepares battery cells whose electrodes are made with the company’s proprietary Dry Coating Precursor for testing. Anaphite
Parallel paths toward the same goal
Anaphite is not alone in this pursuit. On a recent episode of the Volts podcast, Karl Littau, CTO of San Jose, Calif.–based Sakuù, described his company’s solvent-free “laser-printing” method, which he likens to “frosting a cake—without the mess.” Instead of wet slurries and ovens, Sakuù’s Kavian platform fuses dry powders directly onto foil with heat and pressure. Their approach can print electrodes of nearly any chemistry—LFP, NMC, or even formulations yet to be invented—by simply swapping material cartridges. In pilot programs, Sakuù reports that its process cuts carbon-dioxide emissions by about 55 percent, shrinks factory size by 60 percent, and slashes utility costs by more than half.
Other Players in the Dry-Electrode Race
AM Batteries—The Billerica, Mass.–based company uses a powder-to-electrode roll-to-roll process that sprays dry active materials directly onto foil. Unlike Anaphite’s pretreated film-forming powder, AM Batteries’ process skips liquids entirely, bonding particles with a small amount of binder and pressure. It targets continuous high-throughput manufacturing rather than Sakuù’s modular printers. The company is developing pilot lines with cell makers in North America and Asia.
LiCAP Technologies—The Sacramento, Calif.–based company’s Activated Dry Electrode process forms electrode sheets under heat and pressure. LiCAP has commissioned a 300-MWh dry-coating line in California and is partnering with European equipment suppliers to scale up.
The machines themselves are modular and compact—“They could go in a garage,” Littau says—allowing manufacturers to scale production by adding units rather than constructing vast, energy-hungry facilities. While Anaphite and Sakuù take different engineering routes, the destination is the same: a low-cost, low-energy, high-throughput future for battery manufacturing.
Why It Matters
Dry coating unlocks other advantages as well. It enables thicker electrodes, which reduce the proportion of inactive materials and increase both gravimetric and volumetric energy density. The result: batteries that offer higher range per kilogram and per cubic centimeter. Combine that with EVs’ inherent benefits—quietness, smoothness, and low operating costs—and the case for electrification becomes irresistible.
Whether through DCP, Kavian, or the next breakthrough waiting in a lab somewhere, the dry-coating revolution promises to make clean mobility truly mainstream—bringing forward the day when buying an EV isn’t just the cleaner choice; it’s the obvious one.
If a data center is moving in next door, you probably live in the United States. More than half of all upcoming global data centers—a tally that includes land purchased for as-yet-unannounced projects, facilities under construction, and publicly announced plans—will be developed in the United States.
And these figures are likely underselling the near-term data-center dominance of the United States. Power usage varies widely among data centers, depending on land availability and whether the facility will provide liquid cooling or mixed-use services, says Tom Wilson, who studies energy systems at the Electric Power Research Institute. Because of these factors, “data centers in the U.S. are much larger on average than data centers in other countries,” he says.
Wilson adds that the dataset you see here—which comes from the analysis firm Data Center Map—may undercount new Chinese data centers because they are often not announced publicly. Chinese data-center plans are “just not in the repository of information used to collect data on other parts of the world,” he says. If information about China were up-to-date, he would still expect to see “the U.S. ahead, China somewhat behind, and then the rest of the world trailing.”
One thing that worries Wilson is whether the U.S. power grid can meet the rising energy demands of these data centers. “We’ve had flat demand for basically two decades, and now we want to grow. It’s a big system to grow,” he notes.
He thinks the best solution is asking data centers to be more flexible in their power use, maybe by scheduling complex computation for off-peak times or maintaining on-site batteries, removing part of the burden from the power grid. Whether such measures will be enough to keep up with demand remains an open question.
Every September as we plan our January tech forecast issue, IEEE Spectrum’s editors survey their beats and seek out promising projects that could solve seemingly intractable problems or transform entire industries.
Often these projects fly under the radar of the popular technology press, which these days seems more interested in the personalities driving Big Tech companies than in the technology itself. We go our own way here, getting out into the field to bring you news of the hidden gems that genuinely—as the IEEE motto goes—advance technology for the benefit of humanity.
A look back at the last 20 years of January issues reveals that while we’ve certainly covered our share of huge tech projects, like the James Webb Space Telescope, many of the stories touch on subjects most people would have otherwise missed.
Last January, Senior Associate Editor Emily Waltz reported on startups that are piloting ocean-based carbon capture. This issue, she’s back with another CO2-centric story, this time focused on grid-scale storage, which is poised to blow up—literally. Waltz traveled to Sardinia to check out Milan-based Energy Dome’s “bubble battery,” which can store up to 200 megawatt-hours by compressing and decompressing pure carbon dioxide inside an inflatable dome.
This kind of modular, easy-to-deploy energy storage could be especially useful for AI data centers, says Senior Editor Samuel K. Moore, who curated this issue and wrote about gravity energy storage back in January 2021.
“When we think about energy storage, our minds usually go to grid-scale batteries,” Moore says. “Yet these bubbles, which are in many ways more capable than batteries, will be sprouting up all over the place, often in association with computing infrastructure.”
For his story in this issue, Moore dove into the competition between two startups that are developing radio-based cables to replace conventional copper cables and fiber optics in data centers. These radio systems can connect processors 10 to 20 meters apart using a third of the power of optical-fiber cables and at a third of the cost. The next step is to integrate the radio connections directly with GPUs, to ease cooling burdens and help data centers and the AI models running on them continue to scale up.
Big bubbles could help with grid-scale storage; tiny bubbles can liquefy cancer tumors, as Greg Uyeno found when reporting on HistoSonics’ ultrasound treatment. Feared for its aggressive nature and extremely low survival rate, pancreatic cancer kills almost half a million people per year worldwide. HistoSonics uses noninvasive, focused ultrasound to create cavitation bubbles that destroy tumors without dangerously heating surrounding tissue. This year, the company is concluding kidney trials as well as launching pancreatic cancer trials.
Over the last two decades, Spectrum has regularly covered the rise of drones. In 2018, for instance, we reported that the startup Zipline would deploy autonomous drones to deliver blood and medical supplies in rural Rwanda. Today, Zipline has a market cap of about US $4 billion and operates in several African countries, Japan, and the United States, having completed almost 2 million drone deliveries. In this issue, journalist Robb Mandelbaum takes us inside the Wildfire XPrize competition, aimed at providing another life-saving service: dousing wildfires before they grow out of control. Zipline succeeded because it could make deliveries to remote locations much faster than land vehicles. This year’s XPrize teams plan to detect and suppress fires faster than conventional firefighting methods.
Charging an EV at home doesn’t seem like an inconvenience—until you find yourself dragging a cord around a garage or down a rainy driveway, then unplugging and coiling it back up every time you drive the kids to school or run an errand. For elderly or disabled drivers, those bulky cords can be a physical challenge.
As it was for smartphones years ago, wireless EV charging has been the dream. But there’s a difference of nearly four orders of magnitude between the roughly 14 watt-hours of a typical smartphone battery and that of a large EV. That’s what makes the wireless charging on the 108-kilowatt-hour pack in the forthcoming Porsche Cayenne Electric so notable.
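The order-of-magnitude gap is easy to verify with a quick calculation using the two figures in the story (14 Wh for a phone, 108 kWh for the Cayenne’s pack):

```python
import math

phone_wh = 14          # typical smartphone battery, per the article
ev_wh = 108 * 1000     # Porsche Cayenne Electric pack, 108 kWh

ratio = ev_wh / phone_wh
print(round(ratio))                  # ≈ 7714 times the energy
print(round(math.log10(ratio), 2))   # ≈ 3.89 → "nearly four orders of magnitude"
```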
To offer the first inductive charger on a production car, Porsche had to overcome both technical and practical challenges—such as how to protect a beloved housecat prowling below your car. The German automaker demonstrated the system at September’s IAA Mobility show in Munich.
This article is part of our special report Top Tech 2026.
With its 800-volt architecture, the Cayenne Electric can charge at up to 400 kW at a public DC station, enough to fill its pack from 10 to 80 percent in about 16 minutes. The wireless system delivers about 11 kW for Level 2 charging at home, where Porsche says three out of four of its customers do nearly all their fill-ups. Pull the Cayenne into a garage and align it over a floor-mounted plate, and the SUV will charge from 10 to 80 percent in about 7.5 hours. No plugs, tangled cords, or dirty hands. Porsche will offer a single-phase, 48-ampere version for the United States after buyers see their first Cayennes in mid-2026, and a three-phase, 16-A system in Europe.
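Those home-charging numbers hang together arithmetically. A minimal sketch, assuming constant power with no taper near full, recovers roughly the 7.5-hour figure from the pack size, the 10-to-80 percent window, and the 11-kW wireless link (Porsche quotes 90 percent efficiency for it):

```python
def charge_hours(pack_kwh, soc_from, soc_to, charger_kw, efficiency):
    """Hours to move a pack between two states of charge at a given
    charger power and wall-to-battery efficiency. This is a constant-power
    simplification; real sessions taper as the pack nears full."""
    energy_needed = pack_kwh * (soc_to - soc_from)
    return energy_needed / (charger_kw * efficiency)

# Figures from the article: 108 kWh pack, 10-80 percent, 11 kW at 90 percent.
print(round(charge_hours(108, 0.10, 0.80, 11, 0.90), 1))  # → 7.6
```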
Porsche’s Wireless Charging Is Based on an Old Concept
The concept of inductive charging has been around for more than a century. Two coils of copper wire are positioned near one another. A current flowing through one coil creates a magnetic field, which induces voltage in the second coil.
In the Porsche system, the floor-mounted pad, 78 centimeters wide, plugs into the home’s electrical panel. Inside the pad, which weighs 50 kilograms, grid electricity (at 60 hertz in the United States, 50 Hz in most of the rest of the world) is converted to DC and then to high-frequency AC at 2,000 V. The resulting 85-kilohertz magnetic field extends from the pad to the Cayenne, where it is converted again to DC voltage.
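The jump from grid frequency to 85 kHz is what makes the power transfer practical: for a given coil coupling and current, the induced voltage scales linearly with frequency. A minimal sketch, with the mutual inductance and coil current as assumed illustrative values (Porsche does not publish them):

```python
import math

def peak_induced_emf(mutual_inductance_h, coil_current_a, freq_hz):
    """Peak EMF in the receive coil for sinusoidal drive: V = 2*pi*f*M*I."""
    return 2 * math.pi * freq_hz * mutual_inductance_h * coil_current_a

M = 10e-6   # 10 microhenries of coupling -- assumed, illustrative
I = 30      # amps of transmit-coil current -- also assumed

print(round(peak_induced_emf(M, I, 60), 3))      # grid frequency: ~0.113 V
print(round(peak_induced_emf(M, I, 85_000), 1))  # at 85 kHz: ~160.2 V
```

Running the transfer a thousand times faster than the grid turns a negligible induced voltage into a usable one, which is why every inductive charger first converts to high-frequency AC.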
The waterproof pad can also be placed outdoors, and the company says it’s unaffected by leaves, snow, and the like. In fact, the air-cooled pad can get warm enough to melt any snow, reaching temperatures as high as 50 °C.
The Cayenne’s onboard charging hardware mounts between its front electric motor and battery. The 15-kg induction unit wires directly into the battery.
In most EVs, plug-in (conductive) AC charging tops out at around 95 percent efficiency. Porsche says its wireless system delivers 90 percent efficiency, despite an air gap of roughly 12 to 18 cm between the pad and vehicle.
“We’re super proud that we’re just below conductive AC in charging efficiency,” says Simon Schulze, Porsche’s product manager for charging hardware. Porsche also beats inductive phone chargers, which typically max out at about 70 percent efficiency, Schulze says.
When the car gets within 7.5 meters of the charging pad, the Cayenne’s screen-based parking-assist system turns on automatically. Then comes a kind of video game that requires the driver to align a pair of green circles on-screen, one representing the car, the other the pad. It’s like a digital version of the tennis ball some people hang in their garage to gauge parking distance. There’s ample wiggle room, with tolerances of 20 cm left to right, and 15 cm fore and aft. “You can’t miss it,” according to Schulze.
Induction loops detect any objects between the charging plate and the vehicle; such objects, if they’re metal, could heat up dangerously. Radar sensors detect any living things near the pad, and will halt the charging if necessary. People can walk near the car or hop aboard without affecting a charging session.
Christian Holler, Porsche’s head of charging systems, says the system conforms to International Commission on Non-Ionizing Radiation Protection standards for electromagnetic radiation. The field remains below 15 microteslas, so it’s safe for people with pacemakers, Porsche insists. And the aforementioned cat wouldn’t be harmed even if it strayed into the magnetic field, though “its metal collar might get warm,” Schulze says.
The Porsche system’s 90 percent efficiency is impressive but not record-setting. Last year, Oak Ridge National Laboratory (ORNL) transferred 270 kW to a Porsche Taycan with 95 percent efficiency, boosting its state of charge by 50 percent in 10 minutes. That world-record wireless rate relied on polyphase windings for coils, part of a U.S. Department of Energy project that was backed by Volkswagen, Porsche’s parent company.
That effort, Holler says, spawned a Ph.D. paper from VW engineer Andrew Foote. Yet the project had different goals from the one that led to the Cayenne charging system. ORNL was focused on maximum power transfer, regardless of cost, production feasibility, or reliability, he says.
By contrast, designing a system for showroom cars “requires a completely different level of quality and processes,” Holler says.
High Cost Could Limit Adoption
Cayenne buyers in Europe will pay around €7,000 (roughly US $8,100) for the optional charger. Porsche has yet to price it for the United States.
Loren McDonald, chief executive of Chargeonomics, an EV-charging analysis firm, says wireless charging “is clearly the future,” with use cases such as driverless robotaxis, curbside charging, or any site “where charging cables might be an annoyance or even a safety issue.”
But for now, inductive charging’s costly, low-volume status will limit it to niche models and high-income adopters, McDonald says. Public adoption will be critical “so that drivers can convenience-charge throughout their driving day—which then increases the benefits of spending more money on the system.”
Porsche acknowledges that issue; the system conforms to wireless standards set by the Society of Automotive Engineers so that other automakers might help popularize the technology.
“We didn’t want this to be proprietary, a Porsche-only solution,” Schulze says. “We only benefit if other brands use it.”
Powering the AI data center boom dominated the conversation in the global energy sector in 2025. Governments are racing to develop the most advanced AI models, and data center developers are building as fast as they can. But no one is going to get very far without finding ways to generate and move more electricity to these power guzzlers.
Spectrum’s most popular energy stories in 2025 centered around that theme. Readers were particularly interested in stories about next-generation nuclear power, such as small modular reactors and salt-cooled reactors, and how those technologies might support data centers. Readers also turned to Spectrum to learn about the strain all of this is putting on electricity grids, and new technologies to solve those problems.
Despite the weightiness of the energy sector’s challenges, we found some fun, off-beat stories to tell too. One American company is building the world’s largest airplane—it’s bigger than a football field—and it will have one job: to transport wind turbine blades.
I don’t know what 2026 will bring, but as Spectrum’s energy editor, I’ll do my best to provide you stories that are true, useful, and engaging. Cheers to a new year in energy!
The world suddenly needs more power, but one solution being tested is to downsize energy generation and distribute it more widely. One example of that is small modular reactors (SMRs). These nuclear fission reactors are less than a third of the size and power output of conventional reactors. And as the April deadline approached for applying for the US $900 million the United States was offering for SMR development, readers came to Spectrum in droves to learn about the program in a news article authored by contributor Shannon Cuthrell.
But the SMR money paled in comparison to the $80 billion that the United States is spending on a fleet of large-scale nuclear reactors designed by Westinghouse. Will this next group of reactors suffer from the same delays and cost overruns as the ones that put Westinghouse into bankruptcy just a few years ago? Spectrum brought readers an expert analysis on the subject by Wood Mackenzie’s Ed Crooks.
The United States may have the most SMRs in development, but China has the one that’s furthest along. The Linglong One, on the island of Hainan, is expected to begin operations in the first half of 2026. And that’s just one component in a smorgasbord of nuclear reactor experimentation in China. One of the country’s most interesting projects is a thorium-powered, molten-salt reactor, which it began building in 2025 in the Gobi Desert. Prior to this project, the last operating molten-salt reactor, at Oak Ridge National Laboratory, shut down in 1969.
The attraction of thorium as a fuel is that it reduces dependence on uranium. Very little information is available on the progress of China’s thorium reactor, but with help from our Taiwan-based freelancer Yu-Tzu Chiu, we know it’s small—only 10 megawatts—and is scheduled to be operational by 2030. Check back with Spectrum for updates on this reactor and the Linglong One.
While nuclear reactors need to get smaller, wind turbines need to get bigger, say some renewable-energy advocates. And the biggest obstacle to bigger wind—besides the present political backlash—is transportation. Roads, bridges, and train tracks dictate the size of onshore wind turbine blades, and usually can’t accommodate anything over 70 meters long. That’s why Radia, an aviation startup in Boulder, Colo., is building the world’s largest airplane. It will stretch 108 meters in length, be shaped to hold a 105-meter blade, and can land on a makeshift dirt runway. Spectrum contributor Andrew Moseman traveled to Radia’s headquarters to check out the aircraft’s design and fly the behemoth on the company’s simulator. (Spoiler: He landed it.)
None of this new energy generation will matter if we can’t move it across the grid to customers who need it. But many key transmission corridors are maxed out. Blackouts are growing longer and more common. Building new transmission lines takes years and often gets thwarted by NIMBY pushback. Queues for connecting to the grid, whether you’re providing power or requesting it, can be comically long.
To bridge the gap, grid operators globally are turning to innovative grid tech. Collectively called grid-enhancing technologies (GETs), some of the boldest examples can be found in the United Kingdom. For example, the U.K.’s National Grid has been implementing electronic power-flow controllers, called SmartValves, that shift electricity from jammed circuits to those with spare capacity.
The U.K. and other countries have also been reconductoring old lines and installing dynamic line rating, which calculates how much current high-voltage lines can safely carry based on real-time weather conditions. And Scotland has been beefing up its grid-scale battery stations with advanced converters. These leap into action within milliseconds to release the extra power needed when energy supply elsewhere on the grid falters. Spectrum contributor Peter Fairley, who authored several of these stories, traveled to the U.K. to investigate grid congestion woes and tech solutions.
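Dynamic line rating boils down to a thermal balance: a conductor can carry whatever current keeps resistive heating (I²R) below what the weather carries away. The sketch below is a drastic simplification of the kind of calculation such systems perform (the full method is standardized in IEEE Std 738), with assumed illustrative numbers:

```python
import math

def ampacity_amps(cooling_w_per_m, solar_gain_w_per_m, resistance_ohm_per_m):
    """Steady-state thermal balance: I^2 * R = cooling - solar heating,
    so I_max = sqrt((cooling - solar) / R). Values are per meter of line."""
    net_cooling = cooling_w_per_m - solar_gain_w_per_m
    return math.sqrt(max(net_cooling, 0.0) / resistance_ohm_per_m)

R = 7e-5  # ohms per meter -- an assumed conductor resistance

# A windy, cool day sheds far more heat than a still, sunny one,
# so the same line can safely carry more current.
print(round(ampacity_amps(80.0, 10.0, R)))  # breezy: 1000 A
print(round(ampacity_amps(35.0, 15.0, R)))  # still and sunny: 535 A
```

The payoff is that static ratings assume worst-case weather, so real-time weather data often unlocks substantial extra capacity on existing lines.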
At the opposite end of the spectrum, one of the world’s most neglected grids can be found in Cuba. There, decades of poor fuel and maintenance have left the country’s energy infrastructure in crisis. Lately, Cuba’s entire grid has been collapsing every couple of months. Blackouts are so common that citizens are cooking multiple meals at once and working by flashlight, says Ricardo Torres, a Cuban economist who explained the situation for Spectrum readers in this popular expert-authored guest post.
The nearby Caribbean island of Puerto Rico has also been enduring more frequent blackouts, leading some to speculate that the grid in this American territory may go the same way as Cuba’s. The turmoil has prompted widespread development of solar-plus-storage systems across the island that are privately financed, reports Spectrum contributor Julia Tilton.
On the lighter side, we also explored the world of nuclear batteries. These devices store energy in the form of radioactive isotopes. They can last for decades, making them ideal for medical implants, remote infrastructure, robots, and sensors. But the allure of a small battery with a 50-year lifespan has given this sector several false starts. There was a stint in the 1970s when surgeons implanted nuclear-powered pacemakers in over 1,400 people, only to lose track of them over time. Regulators balked when devices containing plutonium-238 started turning up in crematoriums and coffins.
Now the field is experiencing a resurgence in interest. Companies on multiple continents are claiming to be on the verge of commercialization of nuclear batteries. Whether they’ll find willing markets is unclear. In a feature for Spectrum, nuclear battery expert James Blanchard details the history of these devices and why there’s suddenly more activity in this field than he’s ever seen in his 40-year career.
Sometimes a story is so good that we just have to publish it, even if we find it somewhere else. That was the case with a chapter from the book Inevitable: Inside the Messy, Unstoppable Transition to Electric Vehicles (Harvard Business Review Press, 2025). The chapter tells the tale of one power-train engineer at Ford whose internal-combustion-engine expertise slowly became expendable as car companies pivoted to EVs. With permission, we published an adapted version of the chapter, which is chock-full of excellent reporting from author Mike Colias, a veteran automotive reporter. Don’t miss it! (Spoiler: The engineer, Lem Yeung, who left Ford after 30 years, ended up returning to the company a few years later to help clean up the mess caused by the loss of old-school talent. We caught up with Yeung after his return in this Q&A.)
Achieve reliable hermetic sealing for millimeter-scale microbatteries using dual-seal epoxy adhesive methods that maximize energy density while preventing electrolyte leakage and moisture ingress.
What Attendees Will Learn
“Seal smart, not complex”: A dual-seal approach combines epoxy adhesives with gaskets for optimal hermeticity.
2-mm breakthrough: Successfully demonstrated microbatteries operating at 120 °C with 22 hours of continuous performance.
Energy density maximized: Surface-area-to-volume optimization maintains high Wh/L and Wh/kg ratios.
Proven materials: Epoxy adhesives with Kapton/neoprene gaskets deliver chemical resistance and low permeability.
This giant bubble on the island of Sardinia holds 2,000 tonnes of carbon dioxide. But the gas wasn’t captured from factory emissions, nor was it pulled from the air. It came from a gas supplier, and it lives permanently inside the dome’s system to serve an eco-friendly purpose: to store large amounts of excess renewable energy until it’s needed.
Developed by the Milan-based company Energy Dome, the bubble and its surrounding machinery demonstrate a first-of-its-kind “CO2 Battery,” as the company calls it. The facility compresses and expands CO2 daily in its closed system, turning a turbine that generates 200 megawatt-hours of electricity, or 20 MW over 10 hours. And in 2026, replicas of this plant will start popping up across the globe.
We mean that literally. It takes just half a day to inflate the bubble. The rest of the facility takes less than two years to build and can be sited just about anywhere with 5 hectares of flat land.
The first to build one outside of Sardinia will be one of India’s largest power companies, NTPC Limited. The company expects to complete its CO2 Battery sometime in 2026 at the Kudgi power plant in Karnataka, India. In Wisconsin, meanwhile, the public utility Alliant Energy received the all-clear from authorities to begin construction of one in 2026 to supply power to 18,000 homes.
And Google likes the concept so much that it plans to rapidly deploy the facilities in all of its key data-center locations in Europe, the United States, and the Asia-Pacific region. The idea is to provide electricity-guzzling data centers with round-the-clock clean energy, even when the sun isn’t shining or the wind isn’t blowing. The partnership with Energy Dome, announced in July, marked Google’s first investment in long-duration energy storage.
“We’ve been scanning the globe seeking different solutions,” says Ainhoa Anda, Google’s senior lead for energy strategy, in Paris. The challenge the tech giant has encountered is not only finding a long-duration storage option, but also one that works with the unique specs of every region. “So standardization is really important, and this is one of the aspects that we really like” about Energy Dome, she says. “They can really plug and play this.”
Google will prioritize placing the Energy Dome facilities where they’ll have the most impact on decarbonization and grid reliability, and where there’s a lot of renewable energy to store, Anda says. The facilities can be placed adjacent to Google’s data centers or elsewhere within the same grid. The companies did not disclose the terms of the deal.
Anda says Google expects to help the technology “reach a massive commercial stage.”
Getting creative with long-duration energy storage
All this excitement is based on Energy Dome’s one full-size, grid-connected plant in Ottana, Sardinia, which was completed in July. It was built to help solve one of the energy transition’s biggest challenges: the need for grid-scale storage that can provide power for more than 8 hours at a time. Called long-duration energy storage, or LDES in industry parlance, the concept is the key to maximizing the value of renewable energy.
When sun and wind are abundant, solar and wind farms tend to produce more electricity than a grid needs. So storing the excess for use when these resources are scarce just makes sense. LDES also makes the grid more reliable by providing backup and supplementary power.
The problem is that even the best new grid-scale storage systems on the market—mainly lithium-ion batteries—provide only about 4 to 8 hours of storage. That’s not long enough to power through a whole night, or multiple cloudy and windless days, or the hottest week of the year, when energy demand hits its peak.
After the CO2 leaves the dome, it is compressed, cooled, reduced to a liquid, and stored in pressure vessels. To release the energy, the process reverses: The liquid is evaporated, heated, expanded, and then fed through a turbine that generates electricity. Luigi Avantaggiato
Lithium-ion battery systems could be increased in size to store more and last longer, but systems of that size usually aren’t economically viable. Other grid-scale battery chemistries and approaches are in development, such as sodium-based, iron-air, and vanadium redox flow batteries. But the energy density, costs, degradation, and funding complications have challenged the developers of those alternatives.
The tried-and-true grid-scale storage option—pumped hydro, in which water is pumped between reservoirs at different elevations—lasts for decades and can store thousands of megawatts for days. But these systems require specific topography, a lot of land, and can take up to a decade to build.
CO2 Batteries check a lot of boxes that other approaches don’t. They don’t need special topography like pumped-hydro reservoirs do. They don’t need critical minerals like electrochemical and other batteries do. They use components for which supply chains already exist. Their expected lifetime stretches nearly three times as long as that of lithium-ion batteries. And adding size and storage capacity to them significantly decreases cost per kilowatt-hour. Energy Dome expects its LDES solution to be 30 percent cheaper than lithium-ion.
China has taken note. China Huadian Corp. and Dongfang Electric Corp. are reportedly building a CO2-based energy-storage facility in the Xinjiang region of northwest China. Media reports show renderings of domes but give widely varying storage capacities—including 100 MW and 1,000 MW. The Chinese companies did not respond to IEEE Spectrum’s requests for information.
“What I can say is that they are developing something very, very similar [to Energy Dome’s CO2 Battery] but quite large in scale,” says Claudio Spadacini, Energy Dome’s founder and CEO. The Chinese companies “are good, they are super fast, and they have a lot of money,” he says.
Why is Google investing in CO2 Batteries?
When I visited Energy Dome’s Sardinia facility in October, the CO2 had just been pumped out of the dome, so I was able to peek inside. It was massive, monochromatic, and pretty much empty. The inner membrane, which had been holding the uncompressed CO2, had collapsed across the entire floor. A few pockets of the gas remained, making the off-white sheet billow up in spots.
Meanwhile, the translucent outer dome allowed some daylight to pass through, creating a creamy glow that enveloped the vast space. With no structural framing, the only thing keeping the dome upright was the small difference in pressure between the inside and outside air.
“This is incredible,” I said to my guide, Mario Torchio, Energy Dome’s global marketing and communications director.
“It is. But it’s physics,” he said.
Outside the dome, a series of machines connected by undulating pipes moves the CO2 out of the dome for compressing and condensing. First, a compressor pressurizes the gas from 1 bar (100,000 pascals) to about 55 bar (5,500,000 Pa). Next, a thermal-energy-storage system cools the CO2 to an ambient temperature. Then a condenser reduces it into a liquid that is stored in a few dozen pressure vessels, each about the size of a school bus. The whole process takes about 10 hours, and at the end of it, the battery is considered charged.
To discharge the battery, the process reverses. The liquid CO2 is evaporated and heated. It then enters a gas-expander turbine, which is like a medium-pressure steam turbine. This drives a synchronous generator, which converts mechanical energy into electrical energy for the grid. After that, the gas is exhausted at ambient pressure back into the dome, filling it up to await the next charging phase.
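The charge-discharge cycle just described reduces to simple energy accounting. In this sketch, the 200-MWh and 10-hour figures come from the article, while the 75 percent round-trip efficiency is an assumed, illustrative value (Energy Dome has not published exact numbers here):

```python
# Hedged sketch of the CO2 Battery's daily cycle as energy accounting.

def co2_battery_cycle(discharge_mwh, round_trip_efficiency):
    """Return (charging energy drawn from the grid, discharge power
    assuming the article's 10-hour discharge window)."""
    charge_mwh = discharge_mwh / round_trip_efficiency
    discharge_mw = discharge_mwh / 10  # article: 20 MW over 10 hours
    return charge_mwh, discharge_mw

charge, power = co2_battery_cycle(200, 0.75)
print(round(charge, 1), power)  # → 266.7 MWh in, 20.0 MW out
```

The gap between energy in and energy out is the thermodynamic cost of compressing, condensing, and re-expanding the gas; the thermal-energy-storage step exists precisely to claw back as much of that heat as possible.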
Energy Dome engineers inspect the dryer system, which keeps the gaseous CO₂ in the dome at optimal dryness levels at all times. Luigi Avantaggiato
It’s not rocket science. Still, someone had to be the first to put it together and figure out how to do it cost-effectively, which Spadacini says his company has accomplished and patented. “How we seal the turbo machinery, how we store the heat in the thermal-energy storage, how we store the heat after condensing…can really cut costs and increase the efficiency,” he says.
The company uses pure, purpose-made CO2 instead of sourcing it from emissions or the air, because those sources come with impurities and moisture that degrade the steel in the machinery.
What happens if the dome is punctured?
On the downside, Energy Dome’s facility takes up about twice as much land as a comparable-capacity lithium-ion battery would. And the domes themselves, which at their apex reach about the height of a sports stadium and are even longer than one, might stand out on a landscape and draw some NIMBY pushback.
And what if a tornado comes? Spadacini says the dome can withstand wind up to 160 kilometers per hour. If Energy Dome can get half a day’s warning of severe weather, the company can just compress and store the CO2 in the tanks and then deflate the outer dome, he says.
If the worst happens and the dome is punctured, 2,000 tonnes of CO2 will enter the atmosphere. That’s equivalent to the emissions of about 15 round-trip flights between New York and London on a Boeing 777. “It’s negligible compared to the emissions of a coal plant,” Spadacini says. People will also need to stay back 70 meters or more until the air clears, he says.
Worth the risk? The companies lining up to build these systems seem to think so.
This article appears in the January 2026 print issue as “Grid-Scale CO2 Batteries Will Take Off in 2026.”
Demand for electricity is up in the United States, and so is its price. One way to increase supply and lower costs is to build new power plants, but that can take years and cost a fortune. Talgat Kopzhanov is working on a faster, more affordable solution: the generator replacement interconnection process.
The technique links renewable energy sources to the grid connections of shuttered or underutilized power facilities, such as retired coal plants. The process reuses the existing interconnection rights and infrastructure, eliminating the years-long approval process for constructing new U.S. power facilities.
Talgat Kopzhanov
Employer
Middle River Power, in Chicago
Job title
Asset manager
Member grade
Senior member
Alma maters
Purdue University in West Lafayette, Ind., and Indiana University in Bloomington
Kopzhanov, an IEEE senior member, is an asset manager for Middle River Power, based in Chicago. The private equity–sponsored investment and asset management organization specializes in U.S. power generation assets.
“Every power plant has its own interconnection rights,” he says, “but, amazingly, most are not fully utilizing them.” Interconnection rights give a new power source—such as solar energy—permission to connect to a high-voltage transmission system.
“We build the new renewable energy resources on top of them,” Kopzhanov says. “It’s like colocating a new power plant.”
He recently oversaw the installation of two generator-replacement interconnection projects, one for a solar system in Minnesota and the other for a battery storage facility in California.
A fast-track approach that cuts costs
Artificial intelligence data centers are driving up demand and raising electricity bills globally. Although tech companies and investors are willing to spend trillions of U.S. dollars constructing new power facilities, it can take up to seven years just to secure the grid interconnection rights needed to start building a plant, Kopzhanov says. The lengthy process involves system planning, permit requests, and regulatory approvals. Only about 5 percent of new projects are approved each year, he says, in part because of grid reliability issues.
The interconnection technique takes about half the time, he says, bringing cleaner energy online faster. By overcoming interconnection bottlenecks, such as major transmission upgrades that delay renewable projects, the process speeds up project timelines and lowers expenses.
Power Engineers Are in Short Supply
If you want to work in a secure, recession-proof industry, consider a career in power engineering, Kopzhanov says—especially in an unstable job market, when even Amazon, Microsoft, and other large companies are laying off thousands of engineers.
The power industry desperately needs engineers. The global power sector will require between 450,000 and 1.5 million more engineers by 2030 to build, implement, and operate energy infrastructure, according to an IEEE Spectrum article based on a study of the power engineering workforce conducted this year by the IEEE Power & Energy Society.
One of the reasons for the shortage, Kopzhanov says, is that the power sector doesn’t seem exciting to young engineers.
“It has not been popular because the technologies we’re implementing nowadays were invented quite a long time ago,” he says. “So there were not too many recent innovations.”
But with new technologies being introduced, such as the generator replacement interconnection process, now is a great time to get into the industry, he says.
“We are facing lots of different kinds of interesting and big challenges, and we definitely need power engineers who can solve them, such as the supply and demand situation facing us,” he says. “We need right-minded people who can deal with that.
“Until this point, the marvelous engineering systems that have been designed and built with close to 100-percent reliability are not going to be the case moving forward, so we have to come up with innovative approaches.”
Just because you have a power engineering degree, however, doesn’t mean you have to work as a power engineer, he says.
“Most students might assume they will have to dedicate themselves to only being a power engineer for the rest of their life—which is not the case,” he says. “You can be on the business side or be an asset manager like me.
“The power sector is an extremely dynamic and vast area. You’ll have many paths to pursue along your career journey.”
Kopzhanov has been involved with several recent generator replacement interconnection installations. In May a large-scale solar project in Minnesota replaced a retiring coal plant with approximately 720 megawatts of solar-powered generators, making it the largest solar-generating facility in the region. The first 460 MW of capacity is expected to be operational soon.
Another new installation, developed with Middle River, is a portfolio of battery storage projects colocated with natural gas facilities in California. It used existing and incremental interconnection capacity to add the storage system. The surplus renewable energy from the batteries will be used during peak times to reduce the plant’s greenhouse gas emissions, according to a Silicon Valley Clean Energy article about the installation.
“These projects are uniquely positioned to be colocated with existing power plants,” Kopzhanov says. “But, at the same time, they are renewable and sustainable sources of power—which is also helping to decarbonize the environment and meet the emission-reduction goals of the state.”
Influenced by Kazakhstan’s power industry
Born and raised in Taraz, Kazakhstan, Kopzhanov was surrounded by relatives who worked in the power industry. It’s not surprising that he has pursued a career in the field.
Until 1991, when the country was still a Soviet republic, most Kazakhs were required to help build the country’s power and transmission systems, he says. His mother and father are chemical engineers, and his grandfather was involved in the power industry. They told him about how they designed the transformers and overhead power lines. From a young age, he knew he wanted to be an engineer too, he says.
Today the Central Asian country is a major producer of oil, gas, and coal.
Kopzhanov left Kazakhstan in 2008 to pursue a bachelor’s degree in electrical engineering at Purdue University, in West Lafayette, Ind.
After graduating in 2012, he was hired as an electrical design engineer by Fluor Corp. in Farnborough, England. He oversaw the development of a master plan for a power project there. He also engineered and designed high-voltage switchgear, substations, and transformers.
“Every power plant has its own interconnection rights but, amazingly, most are not fully utilizing them.”
In 2015 he joined ExxonMobil in Houston, working as a project manager. During his six years there, he held managerial positions. Eventually, he was promoted to asset advisor and was responsible for evaluating the feasibility of investing in decarbonization and electrification projects by identifying their risks and opportunities.
He decided he wanted to learn more about the business aspects of running a company, so he left in 2021 to pursue an MBA at Indiana University’s Kelley School of Business, in Bloomington. During his MBA program, he briefly worked as a consultant for a lithium-ion battery manufacturer, offering advice on the viability of its proposed projects and investments.
“Engineers aren’t typically connected to the business world,” he says, “but having an understanding of what the needs are and tailoring your future goals toward that is extremely important. In my view, that’s how you’ll become a great technical expert. I definitely recommend that engineers have some kind of understanding of the business side.”
He joined Middle River shortly after graduating from Indiana with his MBA in 2023.
The power of membership
Kopzhanov was introduced to IEEE by a colleague at ExxonMobil after he asked the member about an IEEE plaque displayed on his desk. The coworker explained the activities he was involved in, as well as the process for joining. Kopzhanov became a member in 2019, left, and then rejoined in 2023.
“That was one of the best decisions I have made,” he says.
A member of the IEEE Power & Energy Society, he says its publications, webinars, conferences, and networking events keep him current on new developments.
“Being able to follow what’s happening in the industry, especially in the space where you’re working, is something that has benefited me a lot,” he says.
He has helped organize conferences and reviews research papers.
“It’s those little things that have a significant impact,” he says. “Volunteering is a key piece of belonging to IEEE.”
Fast, direct-current charging can charge an EV’s battery from about 20 percent to 80 percent in 20 minutes. That’s not bad, but it’s still about six times as long as it takes to fill the tank of an ordinary petrol-powered vehicle.
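To put that 20-minute figure in perspective, a quick back-of-the-envelope calculation shows the average power a charger must sustain. The 100-kWh pack size here is an assumption for illustration, not a figure from any particular vehicle:

```python
# Average charging power implied by a 20-80 percent fast charge
# completed in 20 minutes, assuming a hypothetical 100-kWh pack.
pack_kwh = 100.0                  # assumed pack capacity
soc_start, soc_end = 0.20, 0.80   # state-of-charge window
minutes = 20.0

energy_kwh = pack_kwh * (soc_end - soc_start)   # energy delivered
avg_power_kw = energy_kwh / (minutes / 60.0)    # average charging power

print(f"{energy_kwh:.0f} kWh delivered at {avg_power_kw:.0f} kW average")
# -> 60 kWh delivered at 180 kW average
```

Sustaining nearly 200 kW into a pack is exactly what generates the heat that the cooling bottleneck described below makes hard to remove.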
One of the major bottlenecks to even faster charging is cooling, specifically uneven cooling inside big EV battery packs as they charge. Hydrohertz, a British startup launched by former motorsport and power-electronics engineers, says it has a solution: a rotary coolant router, announced in November, that fires liquid coolant exactly where temperatures spike, within milliseconds, far faster than any single-loop system can react. In laboratory tests, this cooling tech allowed an EV battery to safely charge in less than half the time possible with a conventional cooling architecture.
A Smarter Way to Move Coolant
Hydrohertz calls its solution Dectravalve. It looks like a simple manifold, but it contains two concentric cylinders and a stepper motor to direct coolant to as many as four zones within the battery pack. It’s installed in between the pack’s cold plates, which are designed to efficiently remove heat from the battery cells through physical contact, and the main coolant supply loop, replacing a tangle of valves, brackets, sensors, and hoses.
To keep costs low, Hydrohertz designed Dectravalve to be produced with off-the-shelf materials and seals, and with dimensional tolerances that can be met by the fabrication tools many major parts suppliers already use. Keeping things simple and comparatively cheap could improve Dectravalve’s chances of catching on with automakers and suppliers notorious for frugality. “Thermal management is trending toward simplicity and ultralow cost,” says Chao-Yang Wang, a mechanical and chemical engineering professor at Pennsylvania State University whose research includes fluid behavior in batteries and fuel cells. Automakers would prefer passive cooling, he notes, but not if it slows fast charging. So, at least for now, intelligent control is essential.
“If Dectravalve works as advertised, I’d expect to see a roughly 20 percent improvement in battery longevity, which is a lot.”–Anna Stefanopoulou, University of Michigan
Hydrohertz built Dectravalve to work with ordinary water-glycol, otherwise known as antifreeze, keeping integration simple. Using generic antifreeze avoids a step in the validation process where a supplier or EV manufacturer would otherwise have to establish whether some special formulation is compatible with the rest of the cooling system and doesn’t cause unforeseen complications. And because one Dectravalve can replace the multiple valves and plumbing assemblies of a conventional cooling system, it lowers the parts count, reduces leak points, and cuts warranty risk, Hydrohertz founder and CTO Martyn Talbot claims. The tighter thermal control also lets automakers shrink oversize pumps, hoses, and heat exchangers, improving both cost and vehicle packaging.
The valve reads battery-pack temperatures several times per second and shifts coolant flow instantly. If a high-load event—like a fast charge—is coming, it prepositions itself so more coolant is apportioned to known hot spots before the temperature rises in them.
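The control logic described above is simple to sketch. The following Python is our own illustration of such a routing rule, not Hydrohertz’s actual firmware; the four-zone layout, function names, and temperatures are hypothetical:

```python
def hottest_zone(temps):
    """Return the index of the warmest cooling zone in the pack."""
    return max(range(len(temps)), key=lambda i: temps[i])

def route_coolant(temps, fast_charge_imminent=False, known_hot_spot=None):
    """Pick the zone the rotary valve should feed next.

    Normally coolant goes to whichever zone is hottest right now. If a
    high-load event such as a fast charge is expected, the valve
    pre-positions toward a known hot spot before temperatures rise.
    """
    if fast_charge_imminent and known_hot_spot is not None:
        return known_hot_spot
    return hottest_zone(temps)

# A four-zone pack in which zone 2 is running warmest:
temps = [31.5, 33.0, 41.2, 35.8]
print(route_coolant(temps))                          # -> 2 (react to hot spot)
print(route_coolant(temps, True, known_hot_spot=1))  # -> 1 (pre-position)
```

A real controller would run this decision several times per second and modulate flow proportionally rather than switching all-or-nothing, but the react-or-preposition structure is the core idea.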
Multizone control can also speed warm-up to prevent the battery degradation that comes from charging at frigid temperatures. “You can send warming fluid to heat half the pack fast so it can safely start taking load,” says Anna Stefanopoulou, a professor of mechanical engineering at the University of Michigan who specializes in control systems, energy, and transportation technologies. That half can begin accepting load, while the system begins warming the rest of the pack more gradually, she explains. But Dectravalve’s main function remains cooling fast-heating troublesome cells so they don’t slow charging.
Quick response to temperature changes inside the battery doesn’t increase the cooling capacity, but it leverages existing hardware far more efficiently. “Control the coolant with more precision and you get more performance for free,” says Talbot.
Charge Times Can Be Cut By 60 Percent
In early 2025, the Dectravalve underwent bench testing conducted by the Warwick Manufacturing Group (WMG), a multidisciplinary research center at the University of Warwick, in Coventry, England, that works with transport companies to improve the manufacturability of battery systems and other technologies. WMG compared Dectravalve’s cooling performance with that of a conventional single-loop cooling system using the same 100-kilowatt-hour battery pack. During fast-charge trials from 10 percent to 80 percent, Dectravalve held peak cell temperature below 44.5 °C and kept cell-to-cell temperature variation to just below 3 °C without intervention from the battery management system. Similar thermal performance for the single-loop system was made possible only by dialing back the amount of power the battery would accept—the very tapering that keeps fast charging from being on par with gasoline fill-ups.
Keeping the cell temperatures below 50 °C was key, because above that temperature lithium plating begins. The battery suffers irreversible damage when lithium starts coating the surface of the anode—the part of the battery where electrical charge is stored during charging—instead of filling its internal network of pores the way water does when it’s absorbed by a sponge. Plating greatly diminishes the battery’s charge-storage capacity. Letting the battery get too hot can also cause the electrolyte to break down. The result is inhibited flow of ions between the electrodes. And reduced flow within the battery means reduced flow in the external circuit, which powers the vehicle’s motors.
Because the Dectravalve kept temperatures low and uniform—and the battery management system didn’t need to play energy traffic cop and slow charging to a crawl to avoid overheating—charging time was cut by roughly 60 percent. With Dectravalve, the battery reached 80 percent state of charge in between 10 and 13 minutes, versus 30 minutes with the single-cooling-loop setup, according to Hydrohertz.
When Batteries Keep Cool, They Live Longer
Using Warwick’s temperature data, Hydrohertz applied standard degradation models and found that cooler, more uniform packs last longer. Stefanopoulou estimates that if Dectravalve works as claimed, it could boost battery life by roughly 20 percent. “That’s a lot,” she says.
Still, it could be years before the system shows up on new EVs, if ever. Automakers will need years of cycle testing, crash trials, and cost studies before signing off on a new coolant architecture. Hydrohertz says several EV makers and battery suppliers have begun validation programs, and CTO Talbot expects licensing deals to ramp up as results come in. But even in a best-case scenario, Dectravalve won’t be keeping production-model EV batteries cool for at least three model years.
The United States aims to embark on its most active new nuclear construction program since the 1970s. In its most high-dollar nuclear deal yet, the Trump administration in October launched a partnership to build at least US $80 billion worth of new, large-scale nuclear reactors, and chose Westinghouse Electric Company and its co-owners, Brookfield Asset Management and Cameco, for the job.
The money will support the construction of AP1000s, a type of pressurized water reactor developed by Westinghouse that can generate about 1,110 megawatts of electric power. These are the same reactors as units 3 and 4 at the Vogtle nuclear plant in Georgia, which wrapped up seven years behind schedule in 2023 and 2024 and cost more than twice as much as expected—about $35 billion for the pair. Along the way, Westinghouse, based in Cranberry Township, Penn., filed for Chapter 11 bankruptcy protection.
Chief executives of investor-owned utilities know that if they were to propose committing to similar projects on the same commercial terms, they’d be sacked on the spot. As a result, the private sector in the United States has been unwilling to take on the financial risk inherent in building new reactors.
The $80 billion deal with the federal government represents the U.S. nuclear industry’s best opportunity in a generation for a large-scale construction program. But ambition doesn’t guarantee successful execution. The delays and cost overruns that dogged the Vogtle project present real threats for the next wave of reactors.
Streamlining AP1000 Reactor Construction
What might be different about the next set of AP1000s? On the positive side, delivering multiple copies of the same reactor ought to create the conditions for a steady decline in costs. Vogtle Unit 3 was the first AP1000 to be built in the United States, and the lessons learned from it resulted in Vogtle Unit 4 costing 30 percent less than Unit 3. (Six AP1000s are currently operating outside the United States, and 14 more are under construction, according to Westinghouse.)
There’s been a bipartisan effort in the United States to streamline regulatory procedures to ensure that future projects won’t be delayed by the same issues that hampered Vogtle. The Accelerating Deployment of Versatile, Advanced Nuclear for Clean Energy (ADVANCE) Act that was signed into law by former U.S. President Joe Biden in 2024 includes several measures intended to improve processes at the Nuclear Regulatory Commission (NRC).
The last nuclear reactors to be built in the United States—Vogtle Units 3 and 4 in Waynesboro, Georgia—were completed seven years behind schedule and cost more than twice as much as expected. Georgia Power Co.
That included a mandated change in the NRC’s mission statement, setting a goal of “enabling the safe and secure use and deployment of civilian nuclear energy technologies.” It was a symbol of Congress’s intent to encourage the commission to support nuclear development.
In May, President Trump built on that legislation with four executive orders intended to speed up reactor licensing and accelerate nuclear development—a framework that has yet to be tested in practice. In November, the NRC published regulations setting out how it planned to implement the president’s orders. The changes are focused on removing redundant and duplicative rules.
One of Trump’s orders included a series of provisions intended to help build the U.S. nuclear workforce, but it’s clear that will be a challenge. The momentum gained in training skilled workers during the construction at Vogtle is already dissipating. Without other active new reactor projects to move on to immediately in the United States, many of the people who worked there have likely gone into other sectors, such as liquefied natural gas (LNG) plants.
Around the time that construction was wrapping up at Vogtle, many employers in the industry were already reporting difficulties in finding the staff they need, according to the Department of Energy’s 2025 United States Energy and Employment Report. Surveyed in 2024, 22 percent of employers in nuclear construction said it was “very difficult” to hire the workers they needed, and 63 percent said it was “somewhat difficult.” In nuclear manufacturing, 63 percent of employers said hiring was “very difficult.”
If reactor construction really begins to pick up, there is clearly a danger that those numbers will rise.
U.S. Nuclear Power Expansion Plans
So just how many reactors will $80 billion buy? Assuming an average of $16 billion per AP1000—slightly less than for Vogtle, and allowing for cost reductions from economies of scale and learning-by-doing—the plan would mean five new reactors. That would represent an increase of about 5.7 percent in total U.S. nuclear energy generation capacity, if all the reactors currently in service remain online.
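The arithmetic above, spelled out. The $16 billion per-unit cost is the article’s assumption, and the 97-GW fleet figure is our own approximation of the “a little under 100 GW” in service today:

```python
# How many AP1000s does $80 billion buy, and how much does that grow
# the U.S. nuclear fleet? All inputs are assumptions from the text.
budget = 80e9                 # announced partnership value, in dollars
cost_per_reactor = 16e9       # assumed average cost per AP1000
reactors = int(budget // cost_per_reactor)

ap1000_gw = 1.110             # AP1000 output, about 1,110 MW
fleet_gw = 97                 # approximate U.S. nuclear capacity today
added_gw = reactors * ap1000_gw
growth_pct = 100 * added_gw / fleet_gw

print(f"{reactors} reactors, {added_gw:.2f} GW added, +{growth_pct:.1f}%")
# -> 5 reactors, 5.55 GW added, +5.7%
```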
The full details of the $80 billion deal, including the precise allocation of financing and risk-sharing, have not been specified. But Westinghouse’s co-owner, Brookfield, did disclose that the partnership includes profit-sharing mechanisms that will give the U.S. government some of the upside if the initiative succeeds.
The Washington Post reported that after the U.S. signs the final contracts for $80 billion worth of new reactors, it will be entitled to 20 percent of all of Westinghouse’s returns over $17.5 billion. And if Westinghouse’s valuation surpasses $30 billion, the administration can require it to be floated on the stock market. If that happens, the government will get a 20 percent stake.
Enriched uranium is loaded at Vogtle Unit 4. Georgia Power Co.
Japan’s government is also playing a key role. As part of a $550 billion U.S.-Japan trade deal struck in July, the Japanese government pledged large-scale investment in U.S. energy, including nuclear. Japanese companies, including Mitsubishi Heavy Industries, Toshiba Group, and IHI Corp., are interested in investing up to $100 billion in the United States to support the construction of new AP1000s and small modular reactors (SMRs), the two governments said.
The Westinghouse deal supports a range of the administration’s objectives, including power for AI and investment and job creation in the American industrial sector. The focus on AP1000s also makes it possible to rely on U.S.-produced fuel, strengthening energy security. (Many of the designs for SMRs, which have garnered a considerable amount of excitement globally, use high-assay, low-enriched uranium (HALEU) fuel, which is not currently produced on a large scale in the United States.)
U.S. Nuclear Energy Investment
There have been other recent moves to add nuclear capacity in the United States. Santee Cooper, a South Carolina utility, announced plans to complete two AP1000 reactors at the V.C. Summer site in Jenkinsville, S.C., whose construction was abandoned in 2017.
Separately, Google announced in October a deal with NextEra Energy to reopen a 615-MW nuclear plant in Iowa. The Duane Arnold Energy Center was shut down in 2020, and the aim is to have it operational again by the first quarter of 2029. Google has agreed to buy a share of the plant’s output for 25 years.
Construction of two AP1000 reactors at the V.C. Summer nuclear site in Jenkinsville, S.C., was abandoned in 2017 after delays and cost overruns. Executives leading the projects were charged with fraud. Chuck Burton/AP
But the plans that have been announced so far pale in comparison to the Trump administration’s nuclear ambitions. Earlier this year, Trump set a goal of adding a whopping 300 gigawatts of nuclear capacity by 2050, up from a little under 100 GW today. That would mean much stronger growth than is currently projected in Wood Mackenzie’s forecasts, which show a near-doubling of U.S. nuclear generation capacity to about 190 GW in 2050.
The main driver behind the Trump administration’s interest in nuclear is its ambitions for artificial intelligence. Chris Wright, the U.S. energy secretary, has described the race to develop advanced AI as the Manhattan Project of our times, critical to national security and dependent upon a steep increase in electricity generation. Speaking to the Council on Foreign Relations in September, Wright promised: “We’re doing everything we can to make it easy to build power generation and data centers in our country.”
One of the hallmarks of the Trump administration has been its readiness to intervene in markets to pursue its policy goals. Its nuclear strategy exemplifies that approach. In many ways, the Trump administration is acting like an energy company: using its financial strength and its convening power to put together a deal that covers the entire nuclear value chain.
Throughout the history of nuclear power, the industry has worked closely with governments. But the federal government effectively taking a commercial position in the development of new reactors would be a first for the United States. In the first wave of U.S. reactor construction in the 1970s, federal government support was limited to R&D, uranium mining and enrichment, and indemnifying operators against the risk of nuclear accidents.
Before the partial deregulation of U.S. electricity markets that began in the 1990s, utilities could develop nuclear plants with the assurance that the costs could be recovered from customers, even if they went far over budget. With many key markets now at least partially deregulated, nuclear project developers will need other types of guarantees to secure financing and move forward.
The first new plants that result from the $80 billion deal will come online years after Trump has left office. But they could play an important role in boosting U.S. electricity supply and developing advanced AI for decades.
In 1950, the English mathematician Alan Turing devised what he called “the imitation game.” Later dubbed the Turing test, the experiment asks a human participant to conduct a conversation with an unknown partner and try to determine if it’s a computer or a person on the other end of the line. If the person can’t figure it out, the machine passes the Turing test.
Power grid operators are now preparing for their own version of the game. Virtual power plants, which aggregate small, distributed energy resources, are increasingly being tapped to balance electricity supply and demand. The question is: Can they do their job as well as conventional power plants?
Grid operators can now find out by running these power plants through a Turing-like test called the Huels test. To pass, the performance of a virtual power plant must be indistinguishable from that of a conventional power plant. A human grid operator serves as the judge.
Virtual power plant developer EnergyHub, based in Brooklyn, N.Y., developed the test and outlined it in a white paper released today. “What we’re really trying to do is fool the operators into feeling that these virtual power plants can act and feel and smell like conventional power plants,” says Paul Hines, chief scientist at EnergyHub. “This is a kind of first litmus test.”
What Are Virtual Power Plants (VPPs)?
The virtual-versus-conventional power plant question is a timely one. Virtual power plants, or VPPs, are networks of devices such as rooftop solar panels, home batteries, and smart thermostats that come together through software to collectively supply or conserve electricity.
Unlike conventional power generation systems, which might crank up one big gas plant when electricity demand peaks, VPPs tap into small, widely dispersed equipment. For example, a VPP might harness electricity from hundreds of plugged-in electric vehicles or rooftop solar panels. Or it might direct smart thermostats in homes or businesses to turn down heat or cooling systems to reduce demand.
The technology is emerging at a time when concerns over data centers’ electricity demand are hitting a fever pitch. The consultancy BloombergNEF estimates data-center energy demand in the United States will reach 106 gigawatts by 2035, a 36 percent jump from what it had projected just seven months ago.
How utilities and grid operators will meet the growing demand is unclear and faces challenges on many fronts. Turbines for natural gas plants are backordered, and new nuclear reactors are still years away. Wind and solar, while cheap and fast to build, don’t produce the 24/7 electricity that data centers demand and face an uphill political battle under the Trump administration.
All of this together has created an opening for VPPs, which could add gigawatts to the grid without significantly jacking up electricity rates. “It’s a political issue. If you said you’re going to get electricity costs under control, this is literally the only way to do it in 12 months,” says Jigar Shah, a clean energy investor at Multiplier in Washington, D.C., who led the U.S. Department of Energy’s Loan Programs Office under the Biden administration.
VPPs could also reduce utilities’ need to invest in distribution equipment, avoiding supply chain shortages and inflated costs, Shah says. “There is no other idea that you could possibly deploy in 12 months that would have that big of an impact,” he says.
According to a 2024 U.S. Department of Energy report, VPPs could provide between 80 and 160 gigawatts of capacity across the U.S. by 2030—enough to meet between 10 and 20 percent of peak grid demand.
How Can VPPs Gain Grid Operator Trust?
But first, VPP developers have to win over grid operators. Benchmarks like the Huels test are crucial to building that trust. “In order for us to build our reliance on VPPs, they do need to pass the Huels test, and operators need to be able to count on” the VPPs delivering power when called upon, says Lauren Shwisberg, a principal at the nonprofit research group Rocky Mountain Institute who co-authored a recent report on VPPs and was not involved in developing the test.
Matthias Huels, an engineer who spent more than four years at EnergyHub, first came up with the idea for the test in 2024. After workshopping the idea with colleagues and, somewhat ironically, ChatGPT, Huels presented the concept to the company.
The test is subjective by design. In its earliest iteration, it follows a guideline akin to the Supreme Court’s “I know it when I see it” standard for what distinguishes pornography from erotic art. That is to say: Passing the test depends on who’s judging. If a grid operator finds the power from a VPP as dependable as electricity from an actual power plant burning gas to produce electrons, then the VPP has passed.
There are four levels to the Huels test. To reach level 1, a VPP must be able to shave off demand from the grid by, for example, successfully scheduling smart thermostats to dial down when the grid faces maximum demand. To reach level 2, a VPP must be able to respond to market and grid data and dial down demand when prices hit a certain level or tap into solar panels or batteries when power is needed. Human decision makers are involved at these levels.
Passing the Huels test comes at level 3. That’s when a VPP can function automatically because it’s proven reliable enough to be indistinguishable from a gas peaker plant–the type of power station that comes online as backup only when the grid is under stress. Passing level 4 involves VPPs acting fully autonomously to adjust output based on a number of actively changing variables throughout the day.
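Because the levels are defined by capability, the ladder can be sketched as a simple classification. The function and boolean flags below are our own illustration of the four levels described above, not EnergyHub’s API:

```python
def huels_level(sheds_load, responds_to_signals,
                runs_autonomously, fully_autonomous):
    """Rough mapping of a VPP's capabilities onto the four Huels levels.

    Levels 1 and 2 still keep humans in the loop; level 3, where the VPP
    is indistinguishable from a gas peaker, is the passing grade; level 4
    adds full autonomy across changing conditions all day.
    """
    if fully_autonomous:
        return 4
    if runs_autonomously:
        return 3
    if responds_to_signals:
        return 2
    if sheds_load:
        return 1
    return 0

# A VPP that schedules thermostats and reacts to prices but still needs
# human decision makers sits at level 2:
print(huels_level(True, True, False, False))  # -> 2
```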
“The imitation game that Alan Turing came up with was: Can a computer fool an interrogator to think it’s actually human even though it’s a computer,” Hines says. “We propose this idea of a test that would allow us to say: Can we fool a grid operator into thinking that the thing that’s actually solving their problems is this aggregation of many devices instead of a big gas plant?”
Can VPPs Mimic Gas Peaker Plants?
Peaker plants generate power only about 5 percent of the time over their lifespans. That makes them easier for VPPs to mimic: like a peaker plant, the limited power available from demand response or harvested from batteries comes in bursts lasting only a few hours at a time.
Far more difficult is stacking up to a full-scale gas plant, which operates 65 percent of the time or more, or a nuclear plant, which usually operates at least 95 percent of the time. Getting there would involve equipping a VPP network with long-duration storage that could be powered up during the day when solar panels are at peak output and discharged all night long. “You start talking about VPPs with large amounts of batteries that can run 365 days per year,” Hines says. “That’s a road we can go down.”
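The gap between those duty cycles is easy to quantify. A short sketch of the annual energy a 1-gigawatt plant must deliver at each of the capacity factors cited above shows why the peaker is the easiest target:

```python
# Annual energy a 1-GW plant delivers at each capacity factor
# mentioned in the article (5% peaker, 65% gas, 95% nuclear).
hours_per_year = 8760

for name, cf in [("gas peaker", 0.05),
                 ("full-scale gas", 0.65),
                 ("nuclear", 0.95)]:
    print(f"{name}: {cf * hours_per_year:.0f} GWh per year per GW")
# -> gas peaker: 438 GWh per year per GW
# -> full-scale gas: 5694 GWh per year per GW
# -> nuclear: 8322 GWh per year per GW
```

Matching a nuclear plant means delivering roughly 19 times the annual energy of a peaker, which is why Hines points to large long-duration battery fleets as the road toward higher levels.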
EnergyHub has been putting its VPP systems through the Huels test. Last year, EnergyHub successfully ran trials with Arizona Public Service, Duke Energy in North Carolina, and National Grid in Massachusetts. In Arizona, EnergyHub’s software dialed into homes with solar panels and smart thermostats and ran air conditioners to “pre-cool” houses during the day when the sun was generating lots of electricity. This allowed the state’s biggest utility to reduce demand during peak hours when residents would typically return home from work to turn on televisions and crank up their air conditioners.
“You have too much power in the middle of the day because of solar, then the early evening comes and you get people ramping up their evening loads right as the solar is ramping down,” Hines says. “You need something that can feather through that schedule. We created something that can do this.”
That lands the company somewhere between a 2 and 3 on the Huels testing scale. Passing level 3 “is going to take a few years,” Hines says.
Spain’s grid operator, Red Eléctrica, proudly declared that on 16 April 2025, for the first time on a weekday, electricity demand across the country’s peninsular system was met entirely by renewable energy sources.
Just 12 days later, at 12:33 p.m. on Monday, 28 April, Spain and Portugal’s grids collapsed completely, plunging some 55 million people into one of the largest blackouts the region has ever seen. Entire cities lost electricity in the middle of the day. In the bustling airports of Madrid, Barcelona, and other key hubs, departure boards went blank. No power. No Internet. Even mobile phone service—something most people take for granted—was severely compromised. On the roads, traffic lights stopped functioning, snarling traffic and leaving people wondering when the power would return.
The size and scale of the impact were unsettling, but the scariest part was the speed at which it happened. Within minutes, the whole of the Iberian Peninsula’s energy generation dropped from roughly 25 GW to less than 1.2 GW.
While this may sound like a freak accident, incidents like this will continue to happen, especially given the rapid changes to the electrical grid over the past few decades. Worldwide, power systems are evolving from large centralized generation to a multitude of diverse, distributed generation sources, representing a major paradigm shift. This is not merely a “power” problem but also a “systems” problem. It involves how all the parts of the power grid interact to maintain stability, and it requires a holistic solution.
Power grids are undergoing a massive transformation—from coal- and gas-fired plants to millions of solar panels and wind turbines scattered across vast distances. It’s not just a technology swap. It’s a complete reimagining of how electricity is generated, transmitted, and used. And if we get it wrong, we’re setting ourselves up for more catastrophic blackouts like the one that hit all of Spain and Portugal. The good news is that a solution developed by our group at Illinois Institute of Technology over the past two decades and commercialized by our company, Syndem, has achieved global standardization and is moving into large-scale deployment. It’s called the virtual synchronous machine, and it might be the key to keeping the lights on as we transition to a renewable future.
Rapid Deployment of Renewable Energy
The International Energy Agency (IEA) created a Net Zero by 2050 roadmap that calls for nearly 90 percent of global electricity generation to come from renewable, distributed sources, with solar photovoltaic (PV) and wind accounting for almost 70 percent. We are witnessing firsthand a paradigm shift in power systems, moving from centralized to distributed generation.
The IEA projects that renewable power installations will more than double between 2025 and 2030, underscoring the urgent need to integrate renewables smoothly into existing power grids. A key technical nuance is that many distributed energy resources (DERs) produce direct current (DC) electricity, while the grid operates on alternating current (AC). To connect these resources to the grid, inverters convert DC into AC. To understand this further, we need to discuss inverter technologies.
Professor Beibei Ren’s team at Texas Tech University built a SYNDEM test bed consisting of 12 modules plus a substation module, comprising 108 converters in total. Beibei Ren/Texas Tech University
Most of the inverters currently deployed in the field directly control the current (and hence the power) injected into the grid while constantly following the grid voltage; they are often referred to as grid-following inverters. This type of inverter is therefore a current source: Its current is controlled, but its terminal voltage is determined by whatever it connects to. Grid-following inverters rely on a stable grid to inject power from renewable sources and operate properly. That’s fine when the grid is stable, but it becomes a problem when the grid is not. For instance, when the grid goes down or experiences severe disturbances, grid-following inverters typically trip off, meaning they don’t provide support when the grid needs them most.
In recent years, attempts to address grid instability have led to the rise of grid-forming inverters. As the name suggests, these inverters can help form the grid. The term usually refers to an inverter that controls its terminal voltage, including both the amplitude and the frequency, which indirectly controls the current injected into the grid. This inverter behaves as a voltage source: Its terminal voltage is regulated, but its current is determined by whatever it is connected to. Unlike grid-following inverters, grid-forming inverters can operate independently of the grid. This makes them useful when the grid goes down or isn’t available, such as during blackouts. They can also help balance supply and demand, support voltage, and even restart parts of the grid if it shuts down.
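The difference between the two control philosophies can be made concrete with a few lines of code. The sketch below assumes a textbook frequency- and voltage-droop scheme, one common way to realize grid-forming behavior; the droop constants and ratings are invented for illustration and are not taken from any real product.

```python
# Illustrative droop control for a grid-forming inverter (textbook
# scheme, not any vendor's firmware). The inverter sets its own
# voltage and frequency, letting both sag slightly as real (P) and
# reactive (Q) power output rise. A grid-following inverter would
# instead measure the grid's voltage and inject a commanded current.
# All parameter values below are assumptions for illustration.

F_NOMINAL = 50.0      # Hz
V_NOMINAL = 230.0     # V, per-phase RMS
P_RATED = 10_000.0    # W
Q_RATED = 5_000.0     # var

M_P = 0.5 / P_RATED   # Hz per watt: 0.5 Hz droop at rated power
N_Q = 11.5 / Q_RATED  # V per var: about 5 percent droop at rated Q

def grid_forming_setpoints(p_out: float, q_out: float) -> tuple[float, float]:
    """Frequency and voltage setpoints for the inverter's internal
    voltage source, given its measured power output."""
    freq = F_NOMINAL - M_P * p_out
    volt = V_NOMINAL - N_Q * q_out
    return freq, volt

# At no load the inverter "forms" the grid at nominal values; under
# load, frequency and voltage sag slightly, which is what lets many
# parallel inverters share power without communicating.
print(grid_forming_setpoints(0.0, 0.0))        # nominal: (50.0, 230.0)
print(grid_forming_setpoints(10_000.0, 0.0))   # full load: roughly (49.5, 230.0)
```

Because each unit’s frequency sags with its own loading, parallel inverters naturally converge on a shared operating point without any communication link.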
One issue is that the term “grid-forming” means different things to different people: Some definitions lack clear physical meaning, and some controls do not perform robustly under complex grid conditions. Many grid-forming controls are model-based and may not scale properly in large systems. As a result, the design and control of these inverters can vary significantly. Grid-forming inverters made by different companies may not be interoperable, especially in large or complex power systems, which can include grid-scale battery systems, high-voltage DC (HVDC) links, solar PV panels, and wind turbines. The ambiguity of the term is increasingly becoming a barrier for grid-forming inverters, and no standards have been published yet.
Systemic Challenges When Modernizing the Grid
Let’s zoom out for a moment to examine the broader landscape of structural challenges we need to address when transitioning today’s grid into its future state. This transition is often called the democratization of power systems. Just as in politics, where democracy means everyone has a say, this transition in power systems means that every grid player can play a role. The primary difference between a political democracy and a power system is that the power system needs to maintain the stability of its frequency and voltage. If we apply a purely democratic approach to manage the power grid, it will sow the seeds for potential systemic failure.
The second systemic challenge is compatibility. The current power grid was designed long ago for a few big power plants—not for millions of small, intermittent energy sources like solar panels or wind turbines. Ideally, we’d build a whole new grid to fit today’s needs, but that would bring too much disruption, cost too much, and take too long. The only feasible option is to somehow make various grid players compatible with the grid. To better conceptualize this, think about the invention of the modem, which solved the compatibility issues between computers and telephone systems, or the widespread adoption of USB ports. These inventions made many devices, such as cameras, printers, and phones, compatible with computers.
The third systemic challenge is scalability. It’s one thing to hook up a few solar panels to the grid. It’s entirely different to connect millions of them and still keep everything running safely and reliably. It’s like walking one large dog versus walking hundreds of chihuahuas at once. It is crucial for future power systems to adopt an architecture that can operate at different scales, allowing a power grid to break into smaller grids when needed or reconnect to operate as one grid, all autonomously. This is crucial to ensure resilience during extreme weather events, natural disasters, or grid faults.
To address these systemic challenges, the technologies need to undergo a seismic transformation. Today’s power grids are electric-machine-based, with electricity generated by large synchronous machines in centralized facilities, often with slow dynamics. Tomorrow’s grid will run on power electronic converters—small, distributed, and with fast dynamics. It’s a significant change, and one we need to plan for carefully.
The Key Is Synchronization
Traditional fossil fuel power plants use synchronous machines to generate electricity, as they can inherently synchronize with each other or the grid when connected. In other words, they autonomously regulate their speeds and the grid frequency around a preset value, meeting a top requirement of power systems. This synchronization mechanism has underpinned the stable operation and organic expansion of power grids for over a century. So, preserving the synchronization mechanism in today’s grids is crucial for addressing the systemic challenges as we transition from today’s grid into the future.
Unlike traditional power plants, inverters are not inherently synchronous, but they need to be. The key enabling technology is called virtual synchronous machines (VSMs). These are not actual machines, but instead are power electronic converters controlled through special software codes to behave like physical turbines. You can think of them as having the body of power converters with the brain of the older spinning synchronous machines. With VSMs, distributed energy resources can synchronize and support the grid, especially when something unexpected happens.
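One way to picture the “brain” of a VSM is as a swing equation integrated in software. The toy loop below is our own minimal illustration of that principle, not Syndem’s implementation; the inertia and damping constants are assumptions chosen only to make the behavior visible.

```python
# Minimal sketch (illustrative only) of the core idea behind a
# virtual synchronous machine: the inverter's controller integrates
# a swing equation, so its output frequency responds to a power
# imbalance the way a spinning rotor would. Parameter values are
# assumptions, not from any real deployment.

import math

OMEGA_NOM = 2 * math.pi * 50   # rad/s, 50 Hz grid
J = 0.2                        # virtual inertia (assumed)
D = 5.0                        # virtual damping coefficient (assumed)

def vsm_step(omega, p_set, p_out, dt=0.001):
    """Advance the virtual rotor speed by one time step.
    p_set: commanded power (W); p_out: measured electrical power (W)."""
    # Swing equation: J * d(omega)/dt = (P_set - P_out)/omega - D*(omega - OMEGA_NOM)
    torque_imbalance = (p_set - p_out) / omega - D * (omega - OMEGA_NOM)
    return omega + dt * torque_imbalance / J

# A sudden load increase (p_out > p_set) slows the virtual rotor
# gradually, releasing "stored" energy like physical inertia would,
# instead of the abrupt trip-off of a grid-following inverter.
omega = OMEGA_NOM
for _ in range(100):
    omega = vsm_step(omega, p_set=5000.0, p_out=6000.0)
print(omega / (2 * math.pi))   # frequency has sagged slightly below 50 Hz
```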
Syndem’s all-in-one reconfigurable and reprogrammable power electronic converter educational kit. SYNDEM
This naturally addresses the systemic challenges of compatibility and scalability. Like conventional synchronous machines, distributed energy resources are now compatible with the grid and can be integrated at any scale. But it gets better. First, inverters can be added to existing power systems without major hardware changes. Second, VSMs support the creation of small, local energy networks—known as microgrids—that can operate independently and reconnect to the main grid when needed. This flexibility is particularly useful during emergencies or power outages. Lastly, VSMs provide an elegant solution for the common concern about inertia, traditionally provided by large spinning machines that help cushion the grid against sudden changes. By design, VSMs can offer inertia characteristics similar to, or even better than, those of physical machines.
Until now, much of the expert discourse has focused primarily on energy generation. But that’s only half of the equation—the other half is demand: how different loads consume the electricity. Their behavior also plays a crucial role in maintaining grid stability, in particular when generation is powered by intermittent renewable energy sources.
There are many different kinds of loads, including motors, Internet devices, and lighting. They are physically different but have one thing in common: a rectifier at the front end. Motor applications are more efficient with a motor drive, which contains a rectifier, while Internet devices and LED lights consume DC electricity and therefore need rectifiers as well. Like inverters, these rectifiers can also be controlled as VSMs, the only difference being the direction of the power flow: Rectifiers consume electricity, while inverters supply it.
As a result, most generation and consumption facilities in a future grid can be equipped and unified with the same synchronization mechanism to maintain grid stability in a synchronized-and-democratized (SYNDEM) manner. Yes, you read that correctly. Even devices that use electricity—like motors, computers, and LED lights—can play a similar active role in regulating the grid by autonomously adjusting their power demand according to instantaneous grid conditions. A less critical load can adapt its power demand by a larger percentage as needed, even up to 100 percent. In comparison, a more critical load can adjust its power demand by a smaller percentage or simply maintain it. Consequently, the power balance in a SYNDEM grid no longer depends predominantly on adjusting the supply but on dynamically adjusting both the supply and the demand, making it easier to maintain grid stability with intermittent renewable energy sources.
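The demand-side behavior described above can be sketched as a simple frequency-responsive load controller. The deadband, response range, and linear shedding rule below are our own illustrative assumptions, not part of any SYNDEM specification; the "flexibility" factor stands in for how critical a load is.

```python
# Hedged sketch of a frequency-responsive load: it scales its own
# power draw with measured grid frequency. flexibility = 0 means a
# fully critical load (never sheds); flexibility = 1 means a fully
# flexible load (can shed up to 100 percent). All thresholds are
# assumptions for illustration.

F_NOM = 50.0        # Hz
F_DEADBAND = 0.05   # Hz: no response within this band (assumed)
F_MAX_DEV = 0.5     # Hz: full response at this deviation (assumed)

def flexible_demand(p_nominal: float, freq: float, flexibility: float) -> float:
    """Power (W) the load should draw given the grid frequency."""
    dev = F_NOM - freq                       # positive when frequency is low
    if abs(dev) <= F_DEADBAND:
        return p_nominal                     # grid healthy: draw normal power
    # Scale the response linearly between the deadband and max deviation.
    severity = min(abs(dev) - F_DEADBAND, F_MAX_DEV) / F_MAX_DEV
    if dev > 0:      # under-frequency: shed load
        return p_nominal * (1 - flexibility * severity)
    else:            # over-frequency: a flexible load may absorb more
        return p_nominal * (1 + flexibility * severity)

# In a deep under-frequency event, a non-critical load sheds most or
# all of its demand, while a critical one barely moves.
print(flexible_demand(1000.0, 49.4, flexibility=1.0))   # sheds heavily
print(flexible_demand(1000.0, 49.4, flexibility=0.1))   # sheds a little
```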
For many loads, it is often not a problem to adjust their demand by 5 to 10 percent for a short period. Cumulatively, this offers significant support for the grid. Because VSMs respond rapidly, the support provided by such loads is equivalent to inertia and/or spinning reserve—extra power from synchronized generators not at full load. This can reduce the need for the large spinning reserves that are currently necessary in power systems and reduce the effort to coordinate generation facilities. It also mitigates the impact of dwindling inertia caused by the retirement of conventional large generating facilities.
In a SYNDEM grid, all active grid players, regardless of size, whether conventional or renewable, supplying or consuming, would follow the same SYNDEM rule of law and play the same equal role in maintaining grid stability, democratizing power systems, and paving the way for autonomous operation. It is worth highlighting that the autonomous operation can be achieved without relying on communication networks or human intervention, lowering costs and improving security.
The SYNDEM architecture takes VSMs to new heights, addressing all three systemic challenges mentioned above: democratization, compatibility, and scalability. With this architecture, you can stack grids at different scales, much like building blocks. Each home grid can be operated on its own, multiple home grids can be connected to form a neighborhood grid, and multiple neighborhood grids can be connected to create a community grid, and so on. Moreover, such a grid can be decomposed into smaller grids when needed and can reconnect to form a single grid, all autonomously, without changing codes or issuing commands.
The holistic theory is established, the enabling technologies are in place, and the governing standard is approved. However, the full realization of VSMs within the SYNDEM architecture depends on joint ventures and global deployment. This isn’t a task for any one group alone. We must act together. Whether you’re a policymaker, innovator, investor, or simply someone who cares about keeping the lights on, you can play a role. Join us to make power systems worldwide stable, reliable, sustainable, and, eventually, fully autonomous.
Across global electricity networks, the shift to renewable energy has fundamentally changed the behavior of power systems. Decades of engineering assumptions (predictable inertia, dispatchable baseload generation, and slow, well-characterized system dynamics) are eroding as wind and solar become dominant sources of electricity. Grid operators face increasingly steep ramp events, larger frequency excursions, faster transients, and prolonged periods when fossil generation is minimal or absent.
In this environment, battery energy storage systems (BESS) have emerged as essential tools for maintaining stability. They can respond in milliseconds, deliver precise power control, and operate flexibly across a range of services. But unlike conventional generation, batteries are sensitive to operational history, thermal environment, state-of-charge window, system architecture, and degradation mechanisms. Their long-term behavior cannot be described by a single model or a simple efficiency curve; it is the product of complex electrochemical, thermal, and control interactions.
Most laboratory tests and simulations attempt to capture these effects, but they rarely reproduce the operational irregularities of the grid. Batteries in real markets are exposed to rapid fluctuations in power demand, partial state of charge cycling, fast recovery intervals, high-rate events, and unpredictable disturbances. As Professor Dan Gladwin, who leads Sheffield’s research into grid-connected energy storage, puts it, “you only understand how storage behaves when you expose it to the conditions it actually sees on the grid.”
This disconnect creates a fundamental challenge for the industry: How can we trust degradation models, lifetime predictions, and operational strategies if they have never been validated against genuine grid behavior?
Few research institutions have access to the infrastructure needed to answer that question. The University of Sheffield is one of them.
Sheffield’s Centre for Research into Electrical Energy Storage and Applications (CREESA) operates one of the UK’s only research-led, grid-connected, multi-megawatt battery energy storage testbeds. The University of Sheffield
Sheffield’s unique facility
The Centre for Research into Electrical Energy Storage and Applications (CREESA) operates one of the UK’s only research-led, grid-connected, multi-megawatt battery energy storage testbeds. This environment enables researchers to test storage technologies not just in simulation or controlled cycling rigs, but under full-scale, live grid conditions. As Professor Gladwin notes, “we aim to bridge the gap between controlled laboratory research and the demands of real grid operation.”
At the heart of the facility is an 11 kV, 4 MW network connection that provides the electrical and operational realism required for advanced diagnostics, fault studies, control algorithm development, techno-economic analysis, and lifetime modeling. Unlike microgrid scale demonstrators or isolated laboratory benches, Sheffield’s environment allows energy storage assets to interact with the same disturbances, market signals, and grid dynamics they would experience in commercial deployment.
“The ability to test at scale, under real operational conditions, is what gives us insights that simulation alone cannot provide.” —Professor Dan Gladwin, The University of Sheffield
The facility includes:
A 2 MW / 1 MWh lithium titanate system, among the first independent grid-connected BESS of its kind in the UK
A 100 kW second-life EV battery platform, enabling research into reuse, repurposing, and circular-economy models
Support for flywheel systems, supercapacitors, hybrid architectures, and fuel-cell technologies
More than 150 laboratory cell-testing channels, environmental chambers, and impedance spectroscopy equipment
High-speed data acquisition and integrated control systems for parameter estimation, thermal analysis, and fault response measurement
The infrastructure allows Sheffield to operate storage assets directly on the live grid, where they respond to real market signals, deliver contracted power services, and experience genuine frequency deviations, voltage events, and operational disturbances. When controlled experiments are required, the same platform can replay historical grid and market signals, enabling repeatable full power testing under conditions that faithfully reflect commercial operation. This combination provides empirical data of a quality and realism rarely available outside utility-scale deployments, allowing researchers to analyse system behavior at millisecond timescales and gather data at a granularity rarely achievable in conventional laboratory environments.
According to Professor Gladwin, “the ability to test at scale, under real operational conditions, is what gives us insights that simulation alone cannot provide.”
Dan Gladwin, Professor of Electrical and Control Systems Engineering, leads Sheffield’s research into grid-connected energy storage. The University of Sheffield
Setting the benchmark with grid-scale demonstration
One of Sheffield’s earliest breakthroughs came with the installation of a 2 MW / 1 MWh lithium titanate demonstrator, a first-of-a-kind system installed at a time when the UK had no established standards for BESS connection, safety, or control. Professor Gladwin led the engineering, design, installation, and commissioning of the system, establishing one of the country’s first independent megawatt-scale storage platforms.
The project provided deep insight into how high-power battery chemistries behave under grid stressors. Researchers observed sub-second response times and measured the system’s capability to deliver synthetic inertia-like behavior. As Gladwin reflects, “that project showed us just how fast and capable storage could be when properly integrated into the grid.”
But the demonstrator’s long-term value has been its continued operation. Over nearly a decade of research, it has served as a platform for:
Hybridization studies, including battery-flywheel control architectures
Response time optimization for new grid services
Operator training and market integration, exposing control rooms and traders to a live asset
Algorithm development, including dispatch controllers, forecasting tools, and prognostic and health management systems
Comparative benchmarking, such as evaluation of different lithium-ion chemistries, lead-acid systems, and second-life batteries
A recurring finding is that behavior observed on the live grid often differs significantly from what laboratory tests predict. Subtle electrical, thermal, and balance-of-plant interactions that barely register in controlled experiments can become important at megawatt-scale, especially when systems are exposed to rapid cycling, fluctuating set-points, or tightly coupled control actions. Variations in efficiency, cooling system response, and auxiliary power demand can also amplify these effects under real operating stress. As Professor Gladwin notes, “phenomena that never appear in a lab can dominate behavior at megawatt scale.”
These real-world insights feed directly into improved system design. By understanding how efficiency losses, thermal behavior, auxiliary systems, and control interactions emerge at scale, researchers can refine both the assumptions and architecture of future deployments. This closes the loop between application and design, ensuring that new storage systems can be engineered for the operational conditions they will genuinely encounter rather than idealized laboratory expectations.
Ensuring longevity with advanced diagnostics
Sheffield’s Centre for Research into Electrical Energy Storage and Applications (CREESA) enables researchers to test storage technologies not just in simulation or controlled cycling rigs, but under full-scale, live grid conditions. The University of Sheffield
Ensuring the long-term reliability of storage requires understanding how systems age under the conditions they actually face. Sheffield’s research combines high-resolution laboratory testing with empirical data from full-scale grid-connected assets, building a comprehensive approach to diagnostics and prognostics. In Gladwin’s words, “A model is only as good as the data and conditions that shape it. To predict lifetime with confidence, we need laboratory measurements, full-scale testing, and validation under real-world operating conditions working together.”
A major focus is accurate state estimation during highly dynamic operation. Using advanced observers, Kalman filtering, and hybrid physics-ML approaches, the team has developed methods that deliver reliable state-of-charge (SOC), state-of-health (SOH), and state-of-power (SOP) estimates during rapid power swings, irregular cycling, and noisy conditions where traditional methods break down.
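To illustrate the kind of estimator this work builds on, here is a deliberately minimal one-state Kalman filter for SOC: coulomb counting as the process model, and a linearized open-circuit-voltage (OCV) reading as the measurement. Real estimators, including Sheffield’s, handle nonlinearity, temperature, and noise far more carefully; every constant below is an assumption for illustration.

```python
# Toy one-state Kalman filter for battery state of charge (SOC).
# Process model: coulomb counting. Measurement: terminal voltage,
# approximated (at rest) by a linearized OCV curve. All constants
# are illustrative assumptions.

CAPACITY_AS = 3600.0 * 5      # 5 Ah cell, in ampere-seconds (assumed)
OCV_SLOPE = 0.7               # volts per unit SOC, linearized (assumed)
OCV_OFFSET = 3.4              # OCV at SOC = 0, volts (assumed)

def kf_soc_step(soc, p, current, v_measured, dt, q=1e-7, r=1e-3):
    """One predict/update cycle. soc: prior estimate; p: prior
    variance; current: amps (discharge positive); v_measured: volts."""
    # Predict: coulomb counting.
    soc_pred = soc - current * dt / CAPACITY_AS
    p_pred = p + q
    # Update: compare predicted OCV against the measured voltage.
    h = OCV_SLOPE
    innovation = v_measured - (OCV_OFFSET + h * soc_pred)
    k = p_pred * h / (h * p_pred * h + r)     # Kalman gain
    soc_new = soc_pred + k * innovation
    p_new = (1 - k * h) * p_pred
    return soc_new, p_new

# Coulomb counting alone drifts with sensor bias; the voltage update
# continually pulls the estimate back toward the truth.
soc, p = 0.5, 1e-2
for _ in range(200):
    # True SOC is 0.8 (OCV = 3.4 + 0.7 * 0.8 = 3.96 V); cell at rest.
    soc, p = kf_soc_step(soc, p, current=0.0, v_measured=3.96, dt=1.0)
print(soc)   # converges toward 0.8
```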
Another key contribution is understanding cell-to-cell divergence in large strings. Sheffield’s data shows how imbalance accelerates near SOC extremes, how thermal gradients drive uneven ageing, and how current distribution causes long-term drift. These insights inform balancing strategies that improve usable capacity and safety.
Sheffield has also strengthened lifetime and degradation modeling by incorporating real grid behavior directly into the framework. By analyzing actual market signals, frequency deviations, and dispatch patterns, the team uncovers ageing mechanisms that do not appear during controlled laboratory cycling and would otherwise remain hidden.
These contributions fall into four core areas:
State Estimation and Parameter Identification
Robust SOC/SOH estimation
Online parameter identification for equivalent circuit models
Power capability prediction using transient excitation
Data selection strategies under noise and variability
Degradation and Lifetime Modelling
Degradation models built on real frequency and market data
Analysis of micro cycling and asymmetric duty cycles
Hybrid physics-ML forecasting models
Thermal and Imbalance Behavior
Characterizing thermal gradients in containerized systems
Understanding cell imbalance in large-scale systems
Mitigation strategies at the cell and module level
Coupled thermal-electrical behavior under fast cycling
Hybrid Systems and Multi-Technology Optimization
Battery-flywheel coordination strategies
Techno-economic modeling for hybrid assets
Dispatch optimization using evolutionary algorithms
Control schemes that extend lifetime and enhance service performance
Beyond grid-connected systems, Sheffield’s diagnostic methods have also proved valuable in off-grid environments. A key example is the collaboration with MOPO, a company deploying pay-per-swap lithium-ion battery packs in low-income communities across Sub-Saharan Africa. These batteries face deep cycling, variable user behavior, and sustained high temperatures, all without active cooling or controlled environments. The team’s techniques in cell characterization, parameter estimation, and in-situ health tracking have helped extend the usable life of MOPO’s battery packs. “By applying our know-how, we can make these battery-swap packs clean, safe, and significantly more affordable than petrol and diesel generators for the communities that rely on them,” says Professor Gladwin.
Beyond grid-connected systems, Sheffield’s diagnostic methods have also proved valuable in off-grid environments. A key example is the collaboration with MOPO, a company deploying pay-per-swap lithium-ion battery packs in low-income communities across Sub-Saharan Africa. MOPO
Collaboration and the global future
A defining strength of Sheffield’s approach is its close integration with industry, system operators, technology developers, and service providers. Over the past decade, its grid-connected testbed has enabled organizations to trial control algorithms, commission their first battery assets, test market participation strategies, and validate performance under real operational constraints.
These partnerships have produced practical engineering outcomes, including improved dispatch strategies, refined control architectures, validated installation and commissioning methods, and a clearer understanding of degradation under real-world market operation. According to Gladwin, “It is a two-way relationship: We bring the analytical and research tools, industry brings the operational context and scale.”
One of Sheffield’s earliest breakthroughs came with the installation of a 2 MW / 1 MWh lithium titanate demonstrator. Professor Gladwin led the engineering, design, installation, and commissioning of the system, establishing one of the UK’s first independent megawatt-scale storage platforms. The University of Sheffield
This two-way exchange, combining academic insight with operational experience, ensures that Sheffield’s research remains directly relevant to modern power systems. It continues to shape best practice in lifetime modelling, hybrid system control, diagnostics, and operational optimization.
As electricity systems worldwide move toward net zero, the need for validated models, proven control algorithms, and empirical understanding will only grow. Sheffield’s combination of full-scale infrastructure, long-term datasets, and collaborative research culture ensures it will remain at the forefront of developing storage technologies that perform reliably in the environment that matters most: the real world.
The power surging through transmission lines over the iconic stone walls of England’s northern countryside is pushing the United Kingdom’s grid to its limits. To the north, Scottish wind farms have doubled their output over the past decade. In the south, where electricity demand is heaviest, electrification and new data centers promise to draw more power, but new generation is falling short. Construction on a new 3,280-megawatt nuclear power plant west of London lags years behind schedule.
The result is a lopsided flow of power that’s maxing out transmission corridors from the Highlands to London. That grid strain won’t ease any time soon. New lines linking Scotland to southern England are at least three to four years from operation, and at risk of further delays from fierce local opposition.
At the same time, U.K. Prime Minister Keir Starmer is bent on installing even more wind power and slashing fossil-fuel generation by 2030. His Labour government says low-carbon power is cheaper and more secure than natural gas, much of which comes from Norway via the world’s longest underwater gas pipeline and is vulnerable to disruption and sabotage.
The lack of transmission lines available to move power flowing south from Scottish wind farms has caused grid congestion in England. To better manage it, the U.K. has installed SmartValves at three substations in northern England—Penwortham, Harker, and Saltholme—and is constructing a fourth at South Shields. Chris Philpot
The U.K.’s resulting grid congestion prevents transmission operators from delivering some of their cleanest, cheapest generation to all of the consumers who want it. Congestion is a perennial problem whenever power consumption is on the rise. It pushes circuits to their thermal limits and creates grid stability or security constraints.
With congestion relief needed now, the U.K.’s grid operators are getting creative, rapidly tapping new cable designs and innovations in power electronics to squeeze more power through existing transmission corridors. These grid-enhancing technologies, or GETs, present a low-cost way to bridge the gap until new lines can be built.
“GETs allow us to operate the system harder before an investment arrives, and they save a s***load of money,” says Julian Leslie, chief engineer and director of strategic energy planning at the National Energy System Operator (NESO), the Warwick-based agency that directs U.K. energy markets and infrastructure.
Transmission lines running across England’s countryside are maxed out, creating bottlenecks in the grid that prevent some carbon-free power from reaching customers. Vincent Lowe/Alamy
The U.K.’s extreme grid challenge has made it ground zero for some of the boldest GETs testing and deployment. Such innovation involves some risk, because an intervention anywhere on the U.K.’s tightly meshed power system can have system-wide impacts. (Grid operators elsewhere are choosing to start with GETs at their systems’ periphery—where there’s less impact if something goes wrong.)
The question is how far—and how fast—the U.K.’s grid operators can push GETs capabilities. The new technologies still have a limited track record, so operators are cautiously feeling their way toward heavier investment. Power system experts also have unanswered questions about these advanced grid capabilities. For example, will they create more complexity than grid operators can manage in real time? Might feedback between different devices destabilize the grid?
There is no consensus yet as to how to even screen for such risks, let alone protect against them, says Robin Preece, professor in future power systems at the University of Manchester, in England. “We’re at the start of establishing that now, but we’re building at the same time. So it’s kind of this race between the necessity to get this technology installed as quickly as possible, and our ability to fully understand what’s happening.”
How Is the U.K. Managing Grid Congestion?
One of the most innovative and high-stakes tricks in the U.K.’s toolbox employs electronic power-flow controllers, devices that shift electricity from jammed circuits to those with spare capacity. These devices have been able to finesse enough additional wind power through grid bottlenecks to replace an entire gas-fired generator. Installed in northern England four years ago by Smart Wires, based in Durham, N.C., these SmartValves are expected to help even more as NESO installs more of them and masters their capabilities.
Warwick-based National Grid Electricity Transmission, the grid operator for England and Wales, is adding SmartValves and also replacing several thousand kilometers of overhead wire with advanced conductors that can carry more current. And it’s using a technique called dynamic line rating, whereby sensors and models work together to predict when weather conditions will allow lines to carry extra current.
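The principle behind dynamic line rating can be sketched as a thermal balance: a conductor may carry whatever current its cooling can offset. The toy model below keeps only resistive heating and a crude wind-dependent convection term (real methods, such as those in IEEE Std 738, also model radiation, solar gain, and convection much more carefully); all constants are assumptions for illustration.

```python
# Heavily simplified dynamic-line-rating sketch: the current limit
# follows from a steady-state thermal balance on the conductor.
# Cool, windy weather improves cooling and so raises the allowable
# current. All constants are illustrative assumptions.

import math

R_AC = 7e-5   # conductor AC resistance, ohms per metre (assumed)

def ampacity(t_conductor_max, t_ambient, wind_speed):
    """Rough steady-state current limit (amps) for 1 m of conductor."""
    # Toy convective cooling coefficient rising with wind speed (assumed).
    h = 10.0 + 15.0 * math.sqrt(wind_speed)      # W per metre-kelvin
    cooling = h * (t_conductor_max - t_ambient)  # watts per metre
    # Balance I^2 * R against cooling and solve for I.
    return math.sqrt(cooling / R_AC)

# A static rating must assume hot, still air year-round; on a cool,
# breezy day the same line can safely carry considerably more.
print(ampacity(75.0, 35.0, 0.5))   # conservative "static" conditions
print(ampacity(75.0, 10.0, 5.0))   # cool and windy: higher limit
```

Dynamic line rating amounts to feeding the model live weather and conductor measurements instead of those worst-case assumptions.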
Other kinds of GETs are also being used globally. Advanced conductors are the most widely deployed. Dynamic line rating is increasingly common in European countries, and U.S. utilities are beginning to take it seriously. Europe also leads the world in topology-optimization software, which reconfigures power routes to alleviate congestion, and advanced power-flow-control devices like SmartValves.
Engineers install dynamic line rating technology from the Boston-based company LineVision on National Grid’s transmission network. National Grid Electricity Transmission
SmartValves’ chops stand out at the Penwortham substation in Lancashire, England, one of two National Grid sites where the device made its U.K. debut in 2021. Penwortham is a major transmission hub whose spokes desperately need congestion relief. Auditory evidence of the heavy power flows was unmistakable during my visit: The substation buzzes loudly. The sound comes from the electromechanical stresses on the substation’s massive transformers, explains my guide, National Grid commissioned engineer Paul Lloyd.
Penwortham’s transformers, circuits, and protective relays are spread over 15 hectares, sandwiched between pastureland and suburban homes near Preston, a small city north of Manchester. Power arrives from the north on two pairs of 400-kilovolt AC lines, and most of it exits southward via 400-kV and 275-kV double-circuit wires.
Transmission lines lead to the congested Penwortham substation, which has become a test-bed for GETs such as SmartValves and dynamic line rating. Peter Fairley
What makes the substation a strategic test-bed for GETs is its position just north of the U.K. grid’s biggest bottleneck, known as Boundary B7a, which runs east to west across the island. Nine circuits traverse the B7a: the four AC lines headed south from Penwortham, four AC lines closer to Yorkshire’s North Sea coast, and a high-voltage direct-current (HVDC) link offshore. In theory, those circuits can collectively carry 13.6 gigawatts across the B7a. But NESO caps its flow at several gigawatts lower to ensure that no circuits overload if any two of the lines go out of service.
Such limits are necessary for grid reliability, but they are leaving terawatt-hours of wind power stranded in Scotland and increasing consumers’ energy costs: an extra £196 million (US $265 million) in 2024 alone. The costs stem from NESO having to ramp up gas-fired generators to meet demand down south while simultaneously compensating wind-farm operators for curtailing their output, as required under U.K. policy.
So National Grid keeps tweaking Penwortham. In 2011 the substation got its first big GET: phase-shifting transformers (PSTs), a type of analog flow controller. PSTs adjust power flow by creating an AC waveform whose alternating voltage leads or lags its alternating current. Each PST uses a pair of connected transformers to selectively combine power from an AC transmission circuit’s three phases; motors reposition electrical connections on the transformer coils to adjust flows.
Phase-shifting transformers (PSTs) were installed in 2012 at the Penwortham substation and are the analog predecessor to SmartValves. They’re powerful but also bulky and relatively inflexible. It can take 10 minutes or more for the PST’s motorized actuators at Penwortham to tap their full range of flow control, whereas SmartValves can shift within milliseconds. National Grid Electricity Transmission
Penwortham’s pair of 540-tonne PSTs occupy the entire south end of the substation, along with their dedicated chillers, relays, and power supplies. Delivering all that hardware required extensive road closures and floating a huge barge up the adjacent River Ribble, an event that made national news.
The SmartValves at Penwortham stand in stark contrast to the PSTs’ heft, complexity, and mechanics. SmartValves are a type of static synchronous series compensator, or SSSC—a solid-state alternative to PSTs that employs power electronics to tweak power flows in milliseconds. I saw two sets of them tucked into a corner of the substation, occupying a quarter of the area of the PSTs.
The SmartValve V103 design [above] experienced some teething and reliability issues that were ironed out with the technology’s next iteration, the V104. National Grid Electricity Transmission/Smart Wires
The SmartValves are first and foremost an insurance policy to guard against a potentially crippling event: the sudden loss of one of the B7a’s 400-kV lines. If that were to happen, gigawatts of power would instantly seek another route over neighboring lines. And if it happened on a windy day, when lots of power is streaming in from the north, the resulting surge could overload the 275-kV circuits headed from Penwortham to Liverpool. The SmartValves’ job is to save the day.
They do this by adding impedance to the 275-kV lines, thus acting to divert more power to the remaining 400-kV lines. This rerouting of power prevents a blackout that could potentially cascade through the grid. The upside to that protection is that NESO can safely schedule an additional 350 MW over the B7a.
The savings add up. “That’s 350 MW of wind you’re no longer curtailing from wind farms. So that’s 350 times £100 a megawatt-hour,” says Leslie, at NESO. “That’s also 350 MW of gas-fired power that you don’t need to replace the wind. So that’s 350 times £120 a megawatt-hour. The numbers get big quickly.”
Mark Osborne, the National Grid lead asset life-cycle engineer managing its SmartValve projects, estimates the devices are saving U.K. customers over £100 million (US $132 million) a year. At that rate, they’ll pay for themselves “within a few years,” Osborne says. By utility standards, where investments are normally amortized over decades, that’s “almost immediately,” he adds.
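Leslie’s quoted figures can be turned into a back-of-envelope check on Osborne’s estimate. A minimal sketch, in which the number of hours per year that the constraint binds is a hypothetical round figure, not a NESO statistic:

```python
# Back-of-envelope savings estimate using Leslie's quoted figures.
# The hours-per-year the constraint binds is an assumed round number.
unlocked_mw = 350          # extra secure transfer across Boundary B7a
wind_price = 100           # £/MWh compensation to curtailed wind farms
gas_price = 120            # £/MWh for replacement gas-fired generation

saving_per_hour = unlocked_mw * (wind_price + gas_price)  # £ per binding hour
hours_binding = 1500       # hypothetical: hours/year the constraint binds

annual_saving = saving_per_hour * hours_binding
print(f"£{saving_per_hour:,} per constrained hour")
print(f"£{annual_saving / 1e6:.0f} million per year at {hours_binding} hours")
```

At 1,500 binding hours a year, the arithmetic lands in the same ballpark as Osborne’s £100 million figure.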
How Do Grid-Enhancing Technologies Work?
The way Smart Wires’ SSSC devices adjust power flow is based on emulating impedance, a strange beast created by AC power. An AC flow’s changing magnetic field induces a voltage in the line’s conductor that opposes the change in current, acting as a drag on the flow. Smart Wires’ SSSC devices alter power flow by emulating that natural process, effectively adding or subtracting impedance by adding their own voltage wave to the line. Adding a wave that leads the original voltage wave will boost flow, while adding a lagging wave will reduce flow.
The SSSC’s submodules of capacitors and high-speed insulated-gate bipolar transistors operate in sequence to absorb power from a line and synthesize the impedance-altering waves. And thanks to its digital controls and switches, the device can flip from maximum power push to maximum pull within milliseconds.
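The effect can be sketched with the textbook two-bus power-flow relation, P = V²·sin(δ)/X, where the SSSC shows up as emulated reactance added to, or subtracted from, the line’s own. All numbers below are illustrative, not Penwortham’s actual parameters:

```python
import math

# Two-bus AC power flow: P = V^2 * sin(delta) / X. An SSSC injects a
# voltage in quadrature with the line current, which looks to the grid
# like added (inductive) or subtracted (capacitive) series reactance.
V = 275e3                    # voltage magnitude, volts (illustrative)
delta = math.radians(10)     # voltage angle across the line
X_LINE = 30.0                # ohms, the line's natural series reactance

def power_mw(x_emulated):
    """Power transfer with the SSSC emulating extra reactance."""
    return V * V * math.sin(delta) / (X_LINE + x_emulated) / 1e6

base = power_mw(0.0)
pushed = power_mw(-5.0)      # capacitive emulation: flow increases
pulled = power_mw(+5.0)      # inductive emulation: flow decreases
print(f"base {base:.0f} MW, push {pushed:.0f} MW, pull {pulled:.0f} MW")
```

A few ohms of emulated reactance either way shifts tens of megawatts, which is the lever the SmartValves pull on Penwortham’s 275-kV circuits.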
You can trace the development of SSSCs to the advent of HVDC transmission in the 1950s. HVDC converters take power from an AC grid and efficiently convert it and transfer it over a DC line to another point in the same grid, or to a neighboring AC grid. In 1985, Narain Hingorani, an HVDC expert at the Palo Alto–based Electric Power Research Institute, showed that similar converters could modulate the flow of an AC line. Four years later, Westinghouse engineer Laszlo Gyugyi proposed SSSCs, which became the basis for Smart Wires’ boxes.
Major power-equipment manufacturers tried to commercialize SSSCs in the early 2000s. But utilities had little need for flow control back then because they had plenty of conventional power plants that could meet local demand when transmission lines were full.
The picture changed as solar and wind generation exploded and conventional plants began shutting down. In years past, grid operators addressed grid congestion by turning power plants on or off in strategic locations. But as of 2024, the U.K. had shut down all of its coal-fired power plants—save one, which now burns wood—and it has vowed to slash gas-fired generation from about a quarter of electricity supply in 2024 to at most 5 percent in 2030.
The U.K.’s extreme grid challenge has made it ground zero for some of the boldest GETs testing and deployment.
To seize the emerging market opportunity presented by changing grid operations, Smart Wires had to make a crucial technology upgrade: ditching transformers. The company’s first SSSC, and those from other suppliers, relied on a transformer to absorb lightning, voltage surges, and every other grid assault that could fry their power electronics. This made them bulky and added cost. So Smart Wires engineers set to work in 2017 to see if they could live without the transformer, says Frank Kreikebaum, Smart Wires’s interim chief of engineering. Two years later the company had assembled a transformerless electronic shield. It consisted of a suite of filters and diverters, along with a control system to activate them. Ditching the transformer produced a trim, standardized product—a modular system-in-a-box.
SmartValves work at any voltage and are generally ganged together to achieve a desired level of flow control. They can be delivered fast, and they fit in the kinds of tight spaces that are common in substations. “It’s not about cost, even though we’re competitive there. It’s about ‘how quick’ and ‘can it fit,’” says Kreikebaum.
And if the grid’s pinch point shifts? The devices can be quickly moved to another substation. “It’s a Lego-brick build,” says Owen Wilkes, National Grid’s director of network design. Wilkes’s team decides where to add equipment based on today’s best projections, but he appreciates the flexibility to respond to unexpected changes.
National Grid’s deployments in 2021 were the highest-voltage installations of SSSCs at the time, and success there is fueling expansion. National Grid now has packs of SmartValves installed at three substations in northern England and under construction at another, with five more installations planned in that area. Smart Wires has also commissioned commercial projects at transmission substations in Australia, Brazil, Colombia, and the United States.
Dynamic Line Rating Boosts Grid Efficiency
In addition to SSSCs, National Grid has deployed lidar that senses sag on Penwortham’s 275-kV lines—an indication that they’re starting to overheat. The sensors are part of a dynamic line rating system and help grid operators maximize the amount of current that high-voltage lines can carry based on near-real-time weather conditions. (Cooler weather means more capacity.) Now the same technology is being deployed across the B7a—a £1 million investment that is projected to save consumers £33 million annually, says Corin Ireland, a National Grid optimization engineer with the task of seizing GETs opportunities.
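The idea behind dynamic line rating can be sketched as a steady-state thermal balance in the spirit of IEEE Std 738, though heavily simplified here: the conductor’s I²R heating plus solar gain must not exceed convective plus radiative cooling at the maximum allowed conductor temperature. Every parameter below is an assumed illustrative value, not National Grid’s:

```python
import math

# Simplified steady-state conductor thermal balance (loosely after
# IEEE Std 738). Solves for the current at which heating equals
# cooling at the maximum allowed conductor temperature.
def ampacity(wind_speed, ambient_c, t_max_c=75.0):
    d = 0.028            # conductor diameter, m (assumed)
    r = 7e-5             # AC resistance at t_max, ohm/m (assumed)
    emissivity = 0.8
    solar_gain = 15.0    # W per meter of conductor (assumed)
    sigma = 5.67e-8      # Stefan-Boltzmann constant
    t_max_k = t_max_c + 273.15
    t_amb_k = ambient_c + 273.15
    # Crude forced-convection coefficient rising with wind speed (assumed)
    h = 10.0 + 15.0 * wind_speed                    # W/(m^2 K)
    q_conv = h * math.pi * d * (t_max_c - ambient_c)
    q_rad = emissivity * sigma * math.pi * d * (t_max_k**4 - t_amb_k**4)
    return math.sqrt(max(q_conv + q_rad - solar_gain, 0.0) / r)

cool_windy = ampacity(wind_speed=6.0, ambient_c=5.0)
hot_still = ampacity(wind_speed=0.5, ambient_c=30.0)
print(f"{cool_windy:.0f} A on a cool windy day vs {hot_still:.0f} A hot and still")
```

Even this crude model shows why weather sensing pays off: a cool, windy day supports far more current than the conservative static rating a hot, still day would force.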
Old conductor wires are also being swapped out in bulk for ones that can carry more power. National Grid’s business plan calls for 2,416 kilometers of such reconductoring over the coming five years, about 20 percent of its system. Scotland’s transmission operators are busy with their own big swaps.
Scottish wind farms have doubled their power output over the past decade, but it often gets stranded due to grid congestion in England. Andreas Berthold/Alamy
But while National Grid and NESO are making some of the boldest deployments of GETs in the world, they’re not fully tapping the technologies’ capabilities. That’s partly due to the conservative nature of power utilities, and partly because grid operators already have plenty to keep their eyes on. It also stems from the unknowns that still surround GETs, like whether they might take the grid in unforeseen directions if allowed to respond automatically, or get stuck in a feedback loop responding to each other. Imagine SmartValve controllers at different substations fighting, with one substation jumping to remove impedance that the other just added, causing fluctuating power flows.
“These technologies operate very quickly, but the computers in the control room are still very reliant on people making decisions,” says Ireland. “So there are time scales that we have to take into consideration when planning and operating the network.”
This kind of conservative dispatching leaves value on the table. For example, the dynamic line rating models can spit out new line ratings every 15 minutes, but grid operators get updates only every 24 hours. Fewer updates means fewer opportunities to tap the system’s ability to boost capacity. Similarly, for SmartValves, NESO activates installations at only one substation at a time. And control-room operators turn them on manually, even though the devices could automatically respond to faults within milliseconds.
National Grid is upgrading transmission lines dating as far back as the 1960s. This includes installing conductors that retain their strength at higher temperatures, allowing them to carry more power. National Grid Electricity Transmission
Modeling by Smart Wires and National Grid shows a significant capacity boost across Boundary B7a if Penwortham’s SmartValves were to work in tandem with another set further up the line. For example, when Penwortham is adding impedance to push megawatts off the 275-kV lines, a set closer to Scotland could simultaneously pull the power north, nudging the sum over to the B7a’s eastern circuits. Simulations by Andy Hiorns, a former National Grid planning director who consults for Smart Wires, suggest that this kind of cooperative action should increase the B7a circuits’ usable capacity by another 250 to 300 MW. “You double the effectiveness by using them as pairs,” he says.
Operating multiple flow controllers may become necessary for unlocking the next boundary en route to London, south of the B7a, called Boundary B8. As dynamic line rating, beefier conductors, and SmartValves send more power across the B7a, lines traversing B8 are reaching their limits. Eventually, every boundary along the route will have to be upgraded.
Meanwhile, back at its U.S. headquarters, Smart Wires is developing other applications for its SSSCs, such as filtering out power oscillations that can destabilize grids and reduce allowable transfers. That capability could be unlocked remotely with firmware.
The company is also working on a test program that could turn on pairs of SmartValve installations during slack moments when there isn’t much going on in the control rooms. That would give National Grid and NESO operators an opportunity to observe the impacts, and to get more comfortable with the technology.
National Grid and Smart Wires are also hard at work developing industry-first optimization software for coordinating flow-control devices. “It’s possible to extend the technology from how we’re using it today,” says Ireland at National Grid. “That’s the exciting bit.”
NESO’s Julian Leslie shares that excitement and says he expects SmartValves to begin working together to ease power through the grid—once the operators have the modeling right and get a little more comfortable with the technology. “It’s a great innovation that has the potential to be really transformational,” he says. “We’re just not quite there yet.”
This article appears in the February 2026 print issue as “The Low-Cost Electronics Unclogging the U.K.’s Grid.”
German utility RWE implemented the first known virtual power plant (VPP) in 2008, aggregating nine small hydroelectric plants for a total capacity of 8.6 megawatts. In general, a VPP pulls together many small components—like rooftop solar, home batteries, and smart thermostats—into a single coordinated power system. The system responds to grid needs on demand, whether by making stored energy available or reducing energy consumption by smart devices during peak hours.
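The aggregation idea can be sketched in a few lines: many small resources pooled into one fleet and dispatched against a single grid request. The resource names and capacities below are made up for illustration:

```python
from dataclasses import dataclass

# Minimal sketch of VPP aggregation: pool many small distributed
# resources and dispatch them against a single grid request.
@dataclass
class Resource:
    name: str
    available_kw: float  # power it can export or shed right now

def dispatch(resources, request_kw):
    """Greedily allocate a grid request across the pooled fleet."""
    plan, remaining = {}, request_kw
    for r in sorted(resources, key=lambda r: -r.available_kw):
        take = min(r.available_kw, remaining)
        if take > 0:
            plan[r.name] = take
            remaining -= take
    return plan, remaining

fleet = [
    Resource("home_battery_17", 5.0),
    Resource("smart_thermostat_42", 1.5),
    Resource("ev_charger_08", 7.0),
]
plan, unmet = dispatch(fleet, request_kw=10.0)
print(plan, "unmet:", unmet)
```

Real VPP platforms layer forecasting, telemetry, and market bidding on top of this core loop, but the pooling-and-allocation logic is the heart of it.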
VPPs had a moment in the mid-2010s, but market conditions and the technology weren’t quite aligned for them to take off. Electricity demand wasn’t high enough, and existing sources—coal, natural gas, nuclear, and renewables—met demand and kept prices stable. And despite falling costs for hardware like solar panels and batteries, the software to link and manage these resources lagged behind, with little financial incentive for it to catch up.
But times have changed, and less than a decade later, the stars are aligning in VPPs’ favor. They’re hitting a deployment inflection point, and they could play a significant role in meeting energy demand over the next 5 to 10 years in a way that’s faster, cheaper, and greener than other solutions.
U.S. Electricity Demand Is Growing
Electricity demand in the United States is expected to grow 25 percent by 2030 due to data center buildouts, electric vehicles, manufacturing, and electrification, according to estimates from technology consultant ICF International.
At the same time, a host of bottlenecks are making it hard to expand the grid. There’s a backlog of at least three to five years on new gas turbines. Hundreds of gigawatts of renewables are languishing in interconnection queues, where there’s also a backlog of up to five years. On the delivery side, there’s a transformer shortage that could take up to five years to resolve, and a dearth of transmission lines. This all adds up to a long, slow process to add generation and delivery capacity, and it’s not getting faster anytime soon.
“Fueling electric vehicles, electric heat, and data centers solely from traditional approaches would increase rates that are already too high,” says Brad Heavner, the executive director of the California Solar & Storage Association.
Enter the vast network of resources that are already active and grid-connected—and the perfect storm of factors that make now the time to scale them. Adel Nasiri, a professor of electrical engineering at the University of South Carolina, says variability of loads from data centers and electric vehicles has increased, as has deployment of grid-scale batteries and storage. There are more distributed energy resources available than there were before, and the last decade has seen advances in grid management using autonomous controls.
At the heart of it all, though, is the technology that stores and dispatches electricity on demand: batteries.
Advances in Battery Technology
Over the past 10 years, battery prices have plummeted: The average lithium-ion battery pack price fell from US $715 per kilowatt-hour in 2014 to $115 per kWh in 2024. Their energy density has simultaneously increased thanks to a combination of materials advancements, design optimization of battery cells, and improvements in the packaging of battery systems, says Oliver Gross, a senior fellow in energy storage and electrification at automaker Stellantis.
The biggest improvements have come in batteries’ cathodes and electrolytes, with nickel-based cathodes starting to be used about a decade ago. “In many ways, the cathode limits the capacity of the battery, so by unlocking higher-capacity cathode materials, we have been able to take advantage of the intrinsic higher capacity of anode materials,” says Greg Less, the director of the University of Michigan’s Battery Lab.
Increasing the percentage of nickel in the cathode (relative to other metals) increases energy density because nickel can hold more lithium per gram than materials like cobalt or manganese, exchanging more electrons and participating more fully in the redox reactions that move lithium in and out of the battery. The same goes for silicon, which has become more common in anodes. However, there’s a trade-off: These materials cause more structural instability during the battery’s cycling.
The anode and cathode are surrounded by a liquid electrolyte. The electrolyte has to be electrically and chemically stable when exposed to the anode and cathode in order to avoid safety hazards like thermal runaway or fires and rapid degradation. “The real revolution has been the breakthroughs in chemistry to make the electrolyte stable against more reactive cathode materials to get the energy density up,” says Gross. Chemical compound additives—many of them based on sulfur and boron chemistry—for the electrolyte help create stable layers between it and the anode and cathode materials. “They form these protective layers very early in the manufacturing process so that the cell stays stable throughout its life.”
These advances have primarily been made on electric vehicle batteries, which differ from grid-scale batteries in that EVs are often parked or idle, while grid batteries are constantly connected and need to be ready to transfer energy. However, Gross says, “the same approaches that got our energy density higher in EVs can also be applied to optimizing grid storage. The materials might be a little different, but the methodologies are the same.” The most popular cathode material for grid storage batteries at the moment is lithium iron phosphate, or LFP.
Thanks to these technical gains and dropping costs, a domino effect has been set in motion: The more batteries deployed, the cheaper they become, which fuels more deployment and creates positive feedback loops.
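That feedback loop is often modeled with Wright’s law: each doubling of cumulative production cuts price by a fixed fraction. A quick sketch, using the article’s pack prices and a commonly cited ballpark learning rate of 18 percent, which is an assumption here:

```python
import math

# Wright's-law view of the battery feedback loop: each doubling of
# cumulative production cuts pack price by a fixed learning rate.
# The 18% learning rate is an assumed ballpark, not a measured figure.
learning_rate = 0.18
p_2014, p_2024 = 715.0, 115.0  # $/kWh pack prices from the article

# How many cumulative-production doublings would explain that drop?
doublings = math.log(p_2024 / p_2014) / math.log(1 - learning_rate)
print(f"~{doublings:.1f} doublings of cumulative production")
```

Roughly nine doublings over a decade is consistent with the explosive growth in battery manufacturing over that period.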
Regions that have experienced frequent blackouts—like parts of Texas, California, and Puerto Rico—are a prime market for home batteries. Texas-based Base Power, which raised $1 billion in Series C funding in October, installs batteries at customers’ homes and becomes their retail power provider, charging the batteries when excess wind or solar production makes prices cheap, and then selling that energy back to the grid when demand spikes.
There is, however, still room for improvement. For wider adoption, says Nasiri, “the installed battery cost needs to get under $100 per kWh for large VPP deployments.”
Improvements in VPP Software
The software infrastructure that once limited VPPs to pilot projects has matured into a robust digital backbone, making it feasible to operate VPPs at grid scale. Advances in AI are key: Many VPPs now use machine-learning algorithms to predict load flexibility, solar and battery output, customer behavior, and grid stress events. This improves the dependability of a VPP’s capacity, which was historically a major concern for grid operators.
While solar panels have advanced, VPPs until recently were held back by a lack of comparable advances in the software needed to coordinate them. Sunrun
Cybersecurity and interoperability standards are still evolving. Interconnection processes and data visibility in many areas aren’t consistent, making it hard to monitor and coordinate distributed resources effectively. In short, while the technology and economics for VPPs are firmly in place, there’s work yet to be done aligning regulation, infrastructure, and market design.
On top of technical and cost constraints, VPPs have long been held back by regulations that prevented them from participating in energy markets like traditional generators. SolarEdge recently announced enrollment of more than 500 megawatt-hours of residential battery storage in its VPP programs. Tamara Sinensky, the company’s senior manager of grid services, says the biggest hurdle to achieving this milestone wasn’t technical—it was regulatory program design.
California’s Demand Side Grid Support (DSGS) program, launched in mid-2022, pays homes, businesses, and VPPs to reduce electricity use or discharge energy during grid emergencies. “We’ve seen a massive increase in our VPP enrollments primarily driven by the DSGS program,” says Sinensky. Similarly, Sunrun’s Northern California VPP delivered 535 megawatts of power from home-based batteries to the grid in July, and saw a 400 percent increase in VPP participation from last year.
FERC Order 2222, issued in 2020, requires regional grid operators to allow VPPs to sell power, reduce load, or provide grid services directly to wholesale market operators, and get paid the same market price as a traditional power plant for those services. However, many states and grid regions don’t yet have a process in place to comply with the FERC order. And because utilities profit from grid expansion and not VPP deployment, they’re not incentivized to integrate VPPs into their operations. Utilities “view customer batteries as competition,” says Heavner.
According to Nasiri, VPPs would have a meaningful impact on the grid if they achieve a penetration of 2 percent of the market’s peak power. “Larger penetration of up to 5 percent for up to 4 hours is required to have a meaningful capacity impact for grid planning and operation,” he says.
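To put Nasiri’s thresholds in concrete terms, here is an illustrative sizing for a hypothetical regional market with an 80-gigawatt peak (roughly ERCOT-scale; the peak figure is assumed, not from the article):

```python
# Illustrative sizing of Nasiri's VPP penetration thresholds for a
# hypothetical market with an 80 GW peak (assumed, roughly ERCOT-scale).
peak_gw = 80.0
reliability_gw = 0.02 * peak_gw  # 2% of peak: meaningful grid impact
capacity_gw = 0.05 * peak_gw     # 5% of peak...
capacity_gwh = capacity_gw * 4   # ...sustained for 4 hours

print(f"{reliability_gw:.1f} GW for impact; "
      f"{capacity_gw:.1f} GW for 4 h = {capacity_gwh:.0f} GWh for planning")
```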
In other words, VPP operators have their work cut out for them in continuing to unlock the flexible capacity in homes, businesses, and EVs. Additional technical and policy advances could move VPPs from a niche reliability tool to a key power source and grid stabilizer for the energy tumult ahead.
Instead of absorbing energy from the sun to produce electricity, a new class of devices generates power by absorbing heat from its surroundings and beaming it at outer space. Such devices, which do not require exotic materials as their predecessors did, could help ventilate greenhouses and homes, researchers say.
In 2014, scientists invented superthin materials that can cool buildings without using electricity by beaming heat into outer space. When these materials absorb warmth, their compositions and structures ensure they emit heat outward as very specific wavelengths of infrared radiation, ones that air does not absorb. Instead, the radiation is free to leave the atmosphere, carrying energy with it and cooling the area around the material in a process called radiative cooling. The materials could help reduce demand for electricity. Air-conditioning accounts for nearly 15 percent of the electricity consumed by buildings in the United States alone.
Researchers then began exploring whether they could harness radiative cooling to generate power. Whereas solar cells produce electricity from the flow of energy into them from the sun, thermoradiative devices could generate power from energy flowing out from them into space.
“Thermoradiative devices operate like solar cells in reverse,” says Jeremy Munday, professor of electrical and computer engineering at the University of California, Davis. “Rather than pointing them at a hot object like the sun, you point them at a cool object, like the sky.”
However, these devices were typically semiconductor electronics that needed rare or expensive materials to operate efficiently. In a new study, Munday and his colleagues investigated using Stirling engines, which “are mechanically simple and do not rely on exotic materials,” he says. “They also directly produce mechanical power—which is valuable for applications like air movement or water pumping—without needing intermediate electrical conversion.”
A Stirling engine meets a heat-emitting antenna
At the heart of a Stirling engine is a gas sealed in an airtight chamber. When the gas is heated, it expands, and pressure increases within the chamber; when it is cooled, it contracts, reducing pressure. This creates a cycle of expansion and contraction that drives a piston, generating power.
Whereas internal combustion engines rely on large temperature differences to generate power, a Stirling engine can extract useful work even from small ones.
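The ideal Stirling cycle makes that concrete: the work per cycle is W = nR(Th − Tc)·ln(Vmax/Vmin), which stays nonzero for even a 10-kelvin difference. All numbers below are illustrative, not the parameters of Munday’s engine:

```python
import math

# Ideal Stirling-cycle work per cycle: W = n*R*(Th - Tc)*ln(Vmax/Vmin).
# Even a 10 K temperature difference yields nonzero work.
# All values are illustrative, not the actual engine's parameters.
R = 8.314                 # gas constant, J/(mol K)
n = 0.01                  # moles of working gas (assumed)
t_hot, t_cold = 293.0, 283.0  # K: ambient side vs radiatively cooled side
ratio = 1.5               # compression ratio Vmax/Vmin (assumed)

work = n * R * (t_hot - t_cold) * math.log(ratio)
carnot = 1 - t_cold / t_hot
print(f"{work * 1000:.0f} mJ per cycle, Carnot limit {carnot:.1%}")
```

The Carnot limit at such a small temperature difference is only a few percent, which is why the device’s power output is modest even before real-world losses.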
“Stirling engines have been around since the early 1800s, but they always operated by touching some warm object and rejecting waste heat into the local, ambient environment,” Munday says. Instead, the new device is heated by its surroundings and cooled when it radiates energy into space.
The new device combines a Stirling engine with a panel that acts as a heat-radiating antenna. The researchers placed it on the ground outdoors at night.
A year of nighttime experiments revealed that the device achieved more than 10 degrees Celsius of cooling in most months, which the researchers converted into more than 400 milliwatts of mechanical power per square meter. The scientists used their invention to directly power a fan and also coupled it to a small electrical motor to generate current.
Jeremy Munday’s experimental engine resembles a mechanical pinwheel and is mounted on a metal sheet. Jeremy Munday
Since the source of the new device’s energy is Earth’s ambient heat instead of the sun, its power output “is much lower than solar photovoltaics—roughly two orders of magnitude lower,” Munday says. “However, the goal is not to replace solar. Instead, this enables useful work when solar power is unavailable, such as at night and without requiring batteries, wiring, or fuel.”
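A rough power budget shows why the output sits where it does. A sketch under assumed values for the net radiative cooling power and the fraction of the Carnot limit a real Stirling engine achieves; neither figure is taken from the paper:

```python
# Rough power budget for a radiatively cooled Stirling engine.
# The cooling power and engine fraction are assumed illustrative
# values, not figures from the published study.
q_cool = 50.0        # W/m^2 net radiative cooling to the sky (assumed)
t_amb = 293.0        # K, ambient temperature
dt = 10.0            # K, the measured temperature drop
carnot = dt / t_amb  # ideal heat-engine efficiency at small delta-T
engine_frac = 0.25   # fraction of Carnot a real engine reaches (assumed)

p_mech = q_cool * carnot * engine_frac
print(f"~{p_mech * 1000:.0f} mW/m^2 of mechanical power")
```

Under these assumptions the budget lands near the reported figure of a few hundred milliwatts per square meter, roughly two orders of magnitude below solar photovoltaics.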
The researchers calculated the device could generate more than 5 cubic feet per minute of air flow, the minimum air rate the American Society of Heating, Refrigerating and Air-Conditioning Engineers requires to minimize detrimental effects on health inside public buildings. Potential applications may include circulating carbon dioxide within greenhouses and improving comfort inside residential buildings, they say.
Munday and his colleagues note there are many ways in which they could further improve the device’s performance. For instance, they could replace the air sealed in the device with hydrogen or helium gas, which would reduce internal engine friction. “With more-efficient engine designs, we think this approach could enable a new class of passive, around-the-clock power systems that complement solar energy and help support resilient, off-grid infrastructure,” Munday says.
In the future, “we would like to set up these devices in a real greenhouse as a first proof-of-concept application,” Munday says. They would also like to engineer the device to work during the day, he notes.
The scientists detailed their findings in the journal Science Advances.
This article appears in the February 2026 print issue as “Engine Generates Power by Beaming Heat into Space.”