Data Centers Are Transitioning From AC to DC



Last week’s Nvidia GTC conference highlighted new chip architectures to power AI. But as the chips become faster and more powerful, the remainder of data center infrastructure is playing catch-up. The power-delivery community is responding: Announcements from Delta, Eaton, Schneider Electric, and Vertiv showcased new designs for the AI era. Complex and inefficient AC-to-DC power conversions are gradually being replaced by DC configurations, at least in hyperscale data centers.

“While AC distribution remains deeply entrenched, advances in power electronics and the rising demands of AI infrastructure are accelerating interest in DC architectures,” says Chris Thompson, vice president of advanced technology and global microgrids at Vertiv.

AC-to-DC Conversion Challenges

Today, nearly all data centers are designed around AC utility power. The electrical path includes multiple conversions before power reaches the compute load. Power typically enters the data center as medium-voltage AC (1 to 35 kilovolts), is stepped down to low-voltage AC (480 or 415 volts) using a transformer, converted to DC inside an uninterruptible power supply (UPS) for battery storage, converted back to AC, and converted again to low-voltage DC (typically 54 V DC) at the server, supplying the DC power computing chips actually require.
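The compounding effect of that chain can be sketched in a few lines of Python. The per-stage efficiencies below are illustrative assumptions for the sketch, not figures from any vendor:

```python
# Cumulative efficiency of a multi-stage AC power path.
# Per-stage efficiencies are assumed for this sketch, not measured values.
stages = {
    "MV-to-LV transformer (AC)":  0.99,
    "UPS rectifier (AC to DC)":   0.97,
    "UPS inverter (DC to AC)":    0.97,
    "Server PSU (AC to 54-V DC)": 0.96,
}

total = 1.0
for name, efficiency in stages.items():
    total *= efficiency                      # each stage compounds the loss
    print(f"after {name}: {total:.1%}")
```

Under these assumed numbers, roughly a tenth of the input power is gone before it ever reaches a chip, which is the loss each eliminated conversion stage claws back.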

“The double conversion process ensures the output AC is clean, stable, and suitable for data center servers,” says Luiz Fernando Huet de Bacellar, vice president of engineering and technology at Eaton.

That setup worked well enough for the amounts of power required by traditional data centers, whose computational racks draw on the order of 10 kW each. For AI, rack power is starting to approach 1 megawatt. At that scale, the energy losses, current levels, and copper requirements of AC-to-DC conversions become increasingly difficult to justify. Every conversion incurs some power loss. On top of that, as the amount of power that needs to be delivered grows, the sheer size of the converters, as well as the conductor requirements of copper busbars, becomes untenable. According to an Nvidia blog, a 1-MW rack could require as much as 200 kilograms of copper busbar. For a 1-gigawatt data center, that could amount to 200,000 kg of copper.
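Taking the Nvidia blog's per-rack figure at face value, the copper requirement scales linearly with facility power. A back-of-the-envelope sketch (the linear scaling itself is an assumption):

```python
# Back-of-the-envelope copper-busbar scaling, using the ~200 kg per
# 1-MW rack figure cited from the Nvidia blog. Linear scaling with
# total facility power is an assumption of this sketch.
KG_COPPER_PER_MW = 200

def busbar_copper_kg(facility_mw: float) -> float:
    """Estimated busbar copper mass for a facility of the given size."""
    return KG_COPPER_PER_MW * facility_mw

print(busbar_copper_kg(1))     # a single 1-MW rack: 200 kg
print(busbar_copper_kg(1000))  # a 1-GW facility: 200,000 kg
```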

Benefits of High-Voltage DC Power

Converting 13.8-kV AC grid power directly to 800-V DC at the data center perimeter eliminates most of the intermediate conversion steps. This reduces the number of fans and power-supply units, and leads to higher system reliability, lower heat dissipation, improved energy efficiency, and a smaller equipment footprint.

“Each power conversion between the electric grid or power source and the silicon chips inside the servers causes some energy loss,” says Bacellar.

Switching electrical distribution from 415-V AC to 800-V DC enables 85 percent more power to be transmitted through the same conductor size. Higher voltage reduces the current needed to deliver a given amount of power, lowering resistive losses and making power transfer more efficient. Thinner conductors can handle the same load, which reduces copper requirements by 45 percent, improves efficiency by 5 percent, and lowers total cost of ownership by 30 percent for gigawatt-scale facilities.
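The underlying physics can be sketched with a simplified single-conductor model. The load and resistance values below are illustrative, and a real three-phase AC versus DC comparison involves additional factors (power factor, conductor count) that this sketch ignores:

```python
# Simplified single-conductor comparison: resistive loss is I^2 * R,
# and the current needed at fixed power is P / V. Load and resistance
# are illustrative; three-phase details are deliberately ignored.
P = 1_000_000.0   # 1-MW load (illustrative)
R = 0.001         # 1 milliohm of busbar resistance (illustrative)

def resistive_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

loss_415 = resistive_loss(P, 415, R)
loss_800 = resistive_loss(P, 800, R)
print(f"Loss at 415 V: {loss_415:,.0f} W")
print(f"Loss at 800 V: {loss_800:,.0f} W")
print(f"Reduction: {1 - loss_800 / loss_415:.0%}")  # loss scales as 1/V^2
```

Because loss falls with the square of voltage, nearly doubling the voltage cuts the resistive loss in this toy model to roughly a quarter.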

“In a high-voltage DC architecture, power from the grid is converted from medium-voltage AC to roughly 800-V DC and then distributed throughout the facility on a DC bus,” said Vertiv’s Thompson. “At the rack, compact DC-to-DC converters step that voltage down for GPUs and CPUs.”

A report from technology advisory group Omdia claims that higher-voltage DC data centers have already appeared in China. In the Americas, the Mt. Diablo Initiative (a collaboration among Meta, Microsoft, and the Open Compute Project) is a 400-V DC rack power distribution experiment.

Innovations in DC Power Systems

A handful of vendors are trying to get ahead of the game. Vertiv’s 800-V DC ecosystem, which integrates with Nvidia Vera Rubin Ultra Kyber platforms, will be commercially available in the second half of 2026. Eaton, too, is well advanced in its 800-V DC systems innovation, courtesy of a medium-voltage solid-state transformer (SST) that will sit at the heart of its DC power distribution system. Meanwhile, Delta has released 800-V DC in-row 660-kW power racks with a total of 480 kW of embedded battery backup units. And SolarEdge is hard at work on a 99-percent-efficient SST that will be paired with a native DC UPS and a DC power distribution layer.

But much of the industry is far behind. Patrick Hughes, senior vice president of strategy, technical, and industry affairs for the National Electrical Manufacturers Association, says most innovation is happening at the 400-V DC level, though some companies are preparing for 800-V DC. He believes the industry needs a complete, coordinated ecosystem, including power electronics, protection, connectors, sensing, and service-safe components that scale together rather than in isolation. That, in turn, requires retooling manufacturing capacity for DC-specific equipment, expanding semiconductor and materials supply, and clear, long-term demand commitments that justify major capital investment across the value chain.

“Many are taking a cautious approach, offering limited or adapted solutions while waiting for clearer standards, safety frameworks, and customer commitments,” said Hughes. “Building the supply chain will hinge on stabilizing standards and safety frameworks so suppliers can design, certify, manufacture, and install equipment with confidence.”


AI Data Centers Turn to High-Temperature Superconductors



Data centers for AI are turning the world of power generation on its head. There isn’t enough capacity on the grid to come close to meeting the energy demands of the facilities being built. And traditional transmission and distribution networks aren’t efficient enough to take full advantage of the power that is available. According to the U.S. Energy Information Administration (EIA), annual transmission and distribution losses average about 5 percent; the rate is much higher in some other parts of the world. Hence, hyperscalers such as Amazon Web Services, Google Cloud, and Microsoft Azure are investigating every avenue to gain more power and raise efficiency.

Microsoft, for example, is extolling the potential virtues of high-temperature superconductors (HTS) as a replacement for copper wiring. According to the company, HTS can improve energy efficiency by reducing transmission losses, increasing the resiliency of electrical grids, and limiting the impact of data centers on communities by reducing the amount of space required to move power.

“Because superconductors take up less space to move large amounts of power, they could help us build cleaner, more compact systems,” Alastair Speirs, the general manager of global infrastructure at Microsoft, wrote in a blog post.

Superconductors Revolutionize Power Efficiency

Copper is a good conductor, but current encounters resistance as it moves along the line. This generates heat, lowers efficiency, and restricts how much current can be moved. HTS cables largely eliminate this resistance, as they’re made of superconducting materials cooled to cryogenic temperatures. (Despite the name, high-temperature superconductors still rely on frigid temperatures—albeit significantly warmer than those required by traditional superconductors.)

The resulting cables are smaller and lighter than copper wiring, don’t lower voltage as they transmit current, and don’t produce heat. This fits nicely into the needs of AI data centers that are trying to cram massive electrical loads into a tiny footprint. Fewer substations would also be needed. According to Speirs, next-gen superconducting transmission lines deliver capacity that is an order of magnitude higher than conventional lines at the same voltage level.
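The contrast with copper can be made concrete with a toy ohmic-heating calculation. The current and resistance figures below are assumed for illustration, and the cryocooler’s own power draw is not modeled:

```python
# Illustrative ohmic-heating comparison between a copper feeder and an
# HTS cable. Current and resistance values are assumed for this sketch;
# the DC resistance of a superconducting run is effectively zero.
current_a = 2000.0      # illustrative feeder current
copper_r_ohm = 0.005    # assumed resistance of a copper run
hts_r_ohm = 0.0         # superconducting run, DC resistance ~ zero

copper_heat_w = current_a ** 2 * copper_r_ohm   # I^2 * R
hts_heat_w = current_a ** 2 * hts_r_ohm

print(f"Copper run dissipates {copper_heat_w / 1000:.0f} kW as heat")
print(f"HTS run dissipates {hts_heat_w:.0f} W (cryocooler load not modeled)")
```

At data-center currents, kilowatts of waste heat per cable run disappear, which is why the technology pairs naturally with facilities trying to pack large loads into small footprints.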

Microsoft is working with partners to advance this technology, including participating in a US $75 million Series B funding round for Veir, a superconducting power technology developer. Veir’s conductors use HTS tape, most commonly based on a class of materials known as rare-earth barium copper oxide (REBCO). REBCO is a ceramic superconducting layer deposited as a thin film on a metal substrate, then engineered into a rugged conductor that can be assembled into power cables.

“The key distinction from copper or aluminum is that, at operating temperature, the superconducting layer carries current with almost no electrical resistance, enabling very high current density in a much more compact form factor,” says Tim Heidel, Veir’s CEO and cofounder.

Liquid Nitrogen Cooling in Data Centers

Ruslan Nagimov, the principal infrastructure engineer for cloud operations and innovation at Microsoft, stands near the world’s first HTS-powered rack prototype. Microsoft

HTS cables still operate at cryogenic temperatures, so cooling must be integrated into the power-delivery system design. Veir maintains a low operating temperature using a closed-loop liquid-nitrogen system: The nitrogen circulates through the length of the cable, exits at the far end, is recooled, and then recirculated back to the start.

“Liquid nitrogen is a plentiful, low cost, safe material used in numerous critical commercial and industrial applications at enormous scale,” says Heidel. “We are leveraging the experience and standards for working with liquid nitrogen proven in other industries to design stable data center solutions designed for continuous operation, with monitoring and controls that fit critical infrastructure expectations rather than lab conditions.”

HTS cable cooling can be done either within the data center or externally. Heidel favors the latter, as it minimizes footprint and operational complexity indoors. Liquid-nitrogen lines are fed into the facility to serve the superconducting cables, which deliver power where it’s needed, while the cooling system is managed like any other facility subsystem.

Rare earth materials, cooling loops, cryogenic temperatures—all of this adds considerably to costs. Thus, HTS isn’t going to replace copper in the vast majority of applications. Heidel says the economics are most compelling where power delivery is constrained by space, weight, voltage drop, and heat.

“In those cases, the value shows up at the system level: smaller footprints, reduced resistive losses, and more flexibility in how you route power,” says Heidel. “As the technology scales, costs should improve through higher-volume HTS tape manufacturing and better yields, and also through standardization of the surrounding system hardware, installation practices, and operating playbooks that reduce design complexity and deployment risk.”

AI data centers are becoming the perfect proving ground for this approach. Hyperscalers are willing to spend to develop higher-efficiency systems. They can balance spending on development against the revenue they might make by delivering AI services broadly.

“HTS manufacturing has matured—particularly on the tape side—which improves cost and supply availability,” says Husam Alissa, Microsoft’s director of systems technology. “Our focus currently is on validating and derisking this technology with our partners with focus on systems design and integration.”

This story was updated on 26 February, 2026 to correct details of Microsoft’s investment into Veir.


Data Centers Look to Old Airplane Engines for Power



Data-center developers are running into a severe power bottleneck as they rush to build bigger facilities to capitalize on generative AI’s potential. Normally, they would power these centers by connecting to the grid or building a power plant onsite. However, they face major delays in either securing gas turbines or in obtaining energy from the grid.

At the Data Center World Power show in San Antonio in October, natural-gas power provider ProEnergy revealed an alternative—repurposed aviation engines. According to Landon Tessmer, vice president of commercial operations at ProEnergy, some data centers are using his company’s PE6000 gas turbines to provide the power needed during the data center’s construction and during its first few years of operation. When grid power is available, these machines either revert to a backup role, supplement the grid, or are sold to the local utility.

“We have sold 21 gas turbines for two data-center projects amounting to more than 1 gigawatt,” says Tessmer. “Both projects are expected to provide bridging power for five to seven years, which is when they expect to have grid interconnection and no longer need permanent behind-the-meter generation.”

Bridging Power Gaps With a New Kind of Aeroderivative Turbine

It is a common and long-established practice for gas-turbine original equipment manufacturers (OEMs) like GE Vernova and Siemens Energy to convert a successful aircraft engine for stationary electric-power generation applications. Known as aeroderivative gas turbines, these machines have carved out a niche for themselves because they’re lighter, smaller, and more easily maintained than traditional heavy-frame gas turbines.

“It takes a lot to industrialize an aviation engine and make it generate power,” says Mark Axford, president of Axford Turbine Consultants, a gas-turbine consultant and valuation expert for used turbines.

For example, GE Vernova’s LM6000 gas turbine was derived from GE’s successful CF6-80C2 turbofan engine, which was widely used on commercial jets. The CF6-80C2 was first released in 1985, and the LM6000 appeared on the market five years later. To make it suitable for power generation, it needed an expanded turbine section to convert engine thrust into shaft power, a series of struts and supports to mount it on a concrete deck or steel frame, and new controls. Further modifications typically include the development of fuel nozzles that let the machine run on natural gas rather than aviation fuel, and a combustor that minimizes the emission of nitrogen oxides, a major pollutant.

“There just aren’t enough gas turbines to go around and the problem is probably going to get worse,” says Paul Browning, CEO of Generative Power Solutions, formerly the head of GE Power & Water (now GE Vernova) and Mitsubishi Power. Contact GE Vernova to order an LM6000 today and you might be told the waiting list is anywhere from three to five years. You’d hear the same from Siemens Energy for its SGT-A35 aeroderivative gas turbine. Some large, popular models have even longer waiting lists.

For contrast, “a PE6000 from ProEnergy can be delivered in 2027,” Tessmer says.

Landon Tessmer, ProEnergy’s vice president of commercial operations, spoke at the Data Center World Power conference in October 2025. Data Center World Power

Converted Turbofan Aircraft Engine Can Provide 48 Megawatts

ProEnergy buys and overhauls used CF6-80C2 engine cores—the central part of the engine where combustion occurs—and matches them with newly manufactured aeroderivative parts made either by ProEnergy or its partners. After assembly and testing, these refurbished engines are ready for a second life in electric-power generation, where they provide 48 megawatts, enough to power a small-to-medium data center (or a town of perhaps 20,000 to 40,000 households). According to Tessmer, approximately 1,000 of these aircraft engines are expected to be retired over the next decade, so there’s no shortage of them. A large data center may have demand that exceeds 100 MW, and some of the latest data centers being designed for AI are more than 1 GW.
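The households comparison is easy to sanity-check: the arithmetic below simply divides the plant output by the article’s range of household counts to recover an implied average demand per home.

```python
# Sanity check on the households figure: the average per-household demand
# implied by a 48-MW plant serving 20,000 to 40,000 homes.
plant_mw = 48.0

for households in (20_000, 40_000):
    kw_per_home = plant_mw * 1000 / households
    print(f"{households:,} homes -> {kw_per_home:.1f} kW average per home")
```

The implied 1.2 to 2.4 kW of average demand per home is in the plausible range for residential loads, so the comparison holds together.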

An overhaul returns an engine and its components to as-new condition. Each of its thousands of parts is disassembled, cleaned, inspected, and then repaired or replaced as needed. In this way, the engine is renewed for another long cycle of run time. Apart from the engine core, every part inside the PE6000 turbine is manufactured to ProEnergy’s specifications. “We can overhaul the high-pressure core of any CF6-80C2 and fabricate all the low-pressure components,” Tessmer adds.

ProEnergy sells two-turbine blocks in a standard configuration consisting of gas turbines, generators, and a host of other gear, such as systems to cool the air entering the turbine on hot days as a way to boost performance, selective catalytic reduction systems to reduce emissions, and various electrical systems. The company focuses solely on one engine, the CF6-80C2, to streamline and simplify engineering and maintenance.

The PE6000 was originally intended for use by utilities that needed more capacity during peak hours. The data-center boom has turned that expectation on its head: data-center operators want these engines to power the entire facility. The turbines run on natural gas and can be up and running within 5 minutes of being started. If one needs maintenance, it can be swapped out for a spare within 72 hours. Emissions average 2.5 parts per million of nitrogen oxides, well below EPA-regulated levels (generally 10 to 25 parts per million, depending on the use case). Since 2020, ProEnergy has fabricated 75 PE6000 packages and has another 52 being assembled or on order.

Lengthy Grid-Connection Delays Mean More Business

Multiple factors contribute to this popularity. Besides the surge in data centers, there’s often a lengthy wait for transmission lines, which may face local opposition and require permits from multiple municipalities or states. “Aeroderivative gas turbines are gaining ground as a bridging technology that runs behind the meter until the utility is able to supply grid power,” says Tessmer.

Tessmer has seen examples of eight-to-ten-year delays on permitting alone. If connecting to the grid continues to take years, at least in some areas, and if gas turbine manufacturers don’t dramatically boost output, bridging power could become an indispensable enabler of the buildout of AI infrastructure.
