DCR Predicts: Can data centres become ‘good neighbours’ in 2026?

2 February 2026 at 08:18

Gareth Williams, Director, UK, India, Middle East and Africa Data Centres and Technology Leader at Arup, argues that 2026 should be the turning point for designing facilities that stabilise grids, steward water, and deliver visible community benefits.

2026 marks a pivotal opportunity to transform how data centres are perceived by the public. Much has been done to change perceptions from anonymous ‘black boxes’ into strategic assets. Now we must ensure they are seen as positive partners for local energy, water and communities.

That means designing for reciprocity: centres that not only consume, but also stabilise grids, steward scarce water, create jobs, share heat, and leave biodiversity richer than before. This is what I see in briefs for clients, planners and operators alike: putting community benefit at the heart of developments, not as an afterthought.

Energy: from load to flexible, clean, locally useful power

AI-centric workloads are driving volatile, high-density demand, making efficiency gains harder. This is forcing smarter energy strategies, from chip-level liquid cooling and rack-level heat recovery to intelligent workload management.

We will increasingly see data centres act as energy hubs, with co-located renewables, multi-hour batteries, combined heat and power systems, and grid-service participation (frequency response, demand shifting) from day one. Pilot policies already treat facilities as grid allies, including heat-reuse quotas and flexible-access contracts. Operating models will increasingly shift compute to where clean power is abundant, routing non-time-critical training workloads to regions with surplus wind and solar.

Baseload energy supply options will mature unevenly. Some operators are testing power purchase agreements linked to small modular reactors to accelerate capacity. Others will combine hydrogen fuel cells for peak resilience with smart microgrids and local renewables. Regardless, the key is to offer two-way benefits: better uptime for operators and measurable support for national grid stability.

Water: design for scarcity, stewardship and circularity

Cooling demand will keep rising with denser compute. In some cases that is already shifting cooling from air to liquid solutions, but the next step is water stewardship by design: closed-loop systems, immersion cooling where appropriate, and zero-freshwater ambitions in stressed catchments.

The Climate Neutral Data Centre Pact points to a water usage effectiveness (WUE) trajectory from ~1.8 L/kWh to 0.4 L/kWh at water-stressed sites by 2040. This is ambitious, but achievable if we switch to non-potable sources and track upstream and downstream impacts.
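To put that trajectory in perspective, a back-of-envelope calculation shows what the shift from ~1.8 L/kWh to 0.4 L/kWh means in absolute terms. The figures below (a 10 MW IT load running year-round) are our own illustrative assumptions, not data from the Pact:

```python
# Illustrative sketch: annual water use at two WUE levels.
# WUE (Water Usage Effectiveness) = litres of water per kWh of IT energy.
# The 10 MW load and full-utilisation assumption are ours, for scale only.

def annual_water_use_litres(it_load_mw: float, wue_l_per_kwh: float) -> float:
    hours_per_year = 8760
    it_energy_kwh = it_load_mw * 1000 * hours_per_year  # MW -> kW -> kWh/year
    return it_energy_kwh * wue_l_per_kwh

it_load_mw = 10  # assumed mid-sized facility

today = annual_water_use_litres(it_load_mw, 1.8)   # current typical WUE
target = annual_water_use_litres(it_load_mw, 0.4)  # Pact's 2040 ambition

print(f"At 1.8 L/kWh: {today / 1e6:.0f} million litres/year")
print(f"At 0.4 L/kWh: {target / 1e6:.0f} million litres/year")
print(f"Saving:      {(today - target) / 1e6:.0f} million litres/year")
```

On these assumptions, a single 10 MW site moving along the Pact trajectory saves well over a hundred million litres of water per year, which is why non-potable sourcing and closed loops matter at fleet scale.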

Practical levers for 2026 include site-level greywater reuse, recycled/industrial ‘brackish’ water sources, rainwater harvesting with sponge landscapes, and seawater cooling at coastal hubs — where environmental permissions and biodiversity management are designed from the outset. Singapore’s Green Data Centre Roadmap shows how regulation can drive cooling tower efficiency upgrades, blowdown recycling and cycles-of-concentration improvements that cut freshwater withdrawals at scale.

Community engagement: early, transparent, beneficial

Engagement still starts too late on many projects. Flip the sequence: begin with benefits, then shape the scheme around agreed outcomes. Practical packages include renewable partnerships that share surplus power; district heat reuse; biodiversity corridors and accessible green space; fibre upgrades that lift local connectivity; and STEM education funding plus jobs for technicians and landscapers.

Community-first design de-risks approvals and earns trust. These aren’t gestures; they increase value over the life of the campus. This ‘good neighbour’ lens is the fastest way to retire the ‘black box’ image and demonstrate tangible contributions to people’s lives.

Technology: intelligent management, edge resilience, advanced cooling

AI already plays a crucial role in enhancing operations, and it’s only getting smarter. One example is Digital Realty’s collaboration with Ecolab, which identifies real-time operational inefficiencies in cooling systems and recommends improvements to conserve water.

AI-powered management will become the operating system of next-generation facilities, actively orchestrating workloads, power and cooling to maximise efficiency. Intelligent monitoring will drive automation for predictive maintenance, spotting deteriorating components early and scheduling interventions without disrupting SLAs.

At campus scale, hyperscale modular architecture (standardised power and cooling blocks with repeatable controls) will enable capacity expansion and help manage AI surges. And at rack level, advanced liquid cooling systems (direct-to-chip and rear-door heat exchangers) will integrate with smart controls to maximise performance while minimising power and water use.

Materials: low-carbon, modular, designed for circular recovery

Measuring whole-life carbon is vital to managing the sustainability of buildings and critical infrastructure, including data centres. The materials brief should be explicit: certified low-carbon or recycled steel, geopolymer concrete where feasible, and engineered timber for appropriate architectural elements and shading. Envelope design, daylighting and thoughtful material selection can cut operational and embodied impacts while improving working environments.

2026 will see increasing design for disassembly and recovery: standardised rack aisles, traceable components, and procurement that favours reclaimed metals and remanufactured cooling equipment. We should expect to link digital asset plans with physical asset lifecycle strategies, ensuring that refresh cycles trigger material recovery instead of waste.

Acceleration: scale fast, standardise what matters, customise what counts

Large, out-of-town campuses with repeatable, prefabricated/containerised solutions are the only way to match AI demand responsibly. To make this happen, owners and operators will need to standardise the backbone (power blocks, cooling modules, monitoring stacks), then customise for local energy and water contexts.

Reduced bespoke engineering means faster approvals, lower risk, and clearer community commitments (heat and water reuse, biodiversity) baked into template designs. Energy policies that treat campuses as anchor tenants and reward flexibility services will further cut delivery timelines while raising public value.

Conclusion: a systems brief

This is the year to design data centres as reciprocal systems: energy hubs that stabilise grids and disclose 24/7 clean sourcing; water stewards that minimise freshwater draw and close loops; and neighbours that fund skills, share heat, and leave landscapes better than before.

With multidisciplinary teams and a place-first brief, owners and operators can move from compliance to contribution — engineering facilities that are engines of local resilience and global compute. If we build them this way, the sector will be remembered not for what it consumed, but for what it enabled.

This article is part of our DCR Predicts 2026 series. The series has now officially concluded; you can catch all the articles at the link below.

DCR Predicts 2026

Rising Silver Prices Push Solar Industry to Rethink Materials and Reduce Dependence – EQ

In Short: Soaring silver prices are creating cost pressures for solar manufacturers, prompting efforts to reduce or replace silver usage in photovoltaic technologies. As silver is a critical input for solar cells, companies are exploring alternative materials, efficiency improvements, and new manufacturing processes to control costs while maintaining performance and supporting large-scale solar deployment.

In Detail: The sharp rise in global silver prices has become a growing concern for the solar industry, as silver is a key raw material used in photovoltaic cell manufacturing. Solar firms are increasingly facing higher production costs, which could impact project economics, equipment pricing, and long-term profitability if material dependency is not addressed.

Silver is primarily used in the conductive paste that forms electrical contacts in solar cells, enabling efficient flow of electricity. Although the amount of silver per cell has reduced over the years through technological improvements, the scale of global solar deployment means overall demand for silver continues to rise significantly.

With silver prices reaching multi-year highs, manufacturers are under pressure to optimize material usage. Rising input costs can reduce margins for module producers and increase capital expenditure for solar developers, particularly in price-sensitive markets where competitive tariffs leave little room for cost escalation.

To manage these risks, solar companies are investing in research and development to reduce silver content in solar cells. Techniques such as thinner conductive lines, improved cell architectures, and more precise manufacturing processes are helping minimize silver usage without compromising electrical efficiency.

Some firms are also exploring alternative materials to partially or fully replace silver. Copper, aluminum, and other conductive metals are being tested as potential substitutes, although challenges remain in terms of durability, efficiency, corrosion resistance, and long-term performance under harsh operating conditions.

Technological innovation is playing a crucial role in this transition. Advanced cell designs such as TOPCon, heterojunction, and back-contact technologies allow more efficient use of conductive materials, enabling manufacturers to achieve higher power output with lower precious metal consumption.

From a strategic perspective, reducing silver dependence is also about long-term supply security. Silver is used across multiple industries, including electronics, electric vehicles, and investment markets, making it vulnerable to supply constraints and speculative price movements that can disrupt solar manufacturing plans.

Policy and market dynamics further influence this shift. As governments push for rapid renewable energy expansion, keeping solar affordable is essential for achieving climate targets. Material cost control becomes a critical factor in maintaining the competitiveness of solar power compared to other energy sources.

Overall, the solar industry’s efforts to cut or replace silver usage reflect a broader trend toward material efficiency and technological resilience. By reducing reliance on expensive and volatile inputs, solar manufacturers can protect project economics, strengthen supply chains, and ensure the continued scalability of solar energy in a rapidly evolving global energy landscape.

Saudi Arabia pivots NEOM ‘gigaproject’ to AI data centre hub

By: DCR
29 January 2026 at 15:47

Saudi Arabia is reportedly preparing to scale back NEOM, its marquee ‘gigaproject’ on the Red Sea, and instead develop the area as an AI data centre hub.

According to unnamed sources cited in a Financial Times report, Saudi Arabia will scale back NEOM, its hugely ambitious megaproject to create a new livable region in the desert in the northwest of the country, on the Red Sea coast. The project was announced in 2017 by Crown Prince Mohammad Bin Salman and was a cornerstone of his Vision 2030. It was to cover about 26,500 square km, roughly the size of Belgium (see map below).

The image above, from 27 October 2024, shows Sindalah, a luxury island destination and the first physical showcase of NEOM.


NEOM was due for completion in 2030 and included plans for a city called The Line – a row of 500m tall skyscrapers stretching for some 200km. However, NEOM suffered many delays and cost overruns, as well as criticism for potential environmental damage and for being unrealistic, among other things.

In addition, Saudi Arabia is hosting the Expo international trade fair in 2030 and the football World Cup in 2034, both of which involve large-scale investment. Work on NEOM was paused in 2025 while the government looked at its options in a year-long review, which is scheduled to conclude this quarter.

According to the FT report, the focus for the region will shift towards industry, such as becoming a hub for data centres. Its location means sea water can be used for cooling, and the Crown Prince is keen to make his country a leader in AI infrastructure – a hub for data centres to power AI – to attract inward investment and high-profile international partners.

An unnamed source cited by the FT said the location had other advantages too, such as digital infrastructure and its position at the crossroad of three continents (Africa, Asia and Europe), plus almost limitless renewable energy and available land.

It’s not the first time NEOM has been touted as potentially playing host to data centres, with DataVolt committing $5 billion to develop a new 1.5 GW net zero AI campus at NEOM’s Oxagon. That was expected to come online in 2028, but it’s unknown if it’ll be impacted by the planned rethink for the NEOM area.

This article originally appeared on Mobile Europe, with additional commentary from Data Centre Review.

Lanarkshire becomes Scotland’s first AI Growth Zone, UK’s fifth

29 January 2026 at 14:40

Lanarkshire has been named the UK’s latest AI Growth Zone, with the UK Government backing a major expansion around DataVita’s data centre site in the area. 

This is the first AI Growth Zone located in Scotland, which has long been positioned as an ideal area to host one – given the abundance of renewable power that is available in the region. The Scottish Government has also been keen to promote the area in hopes of developing it into a leading zero-carbon, cost-competitive green data centre hub. 

The Lanarkshire AI Growth Zone, which is the fifth AIGZ to be announced, is set to be based around DataVita’s campus, with the Scottish data centre firm delivering the site in partnership with AI cloud provider CoreWeave. That’s slightly different from other sites, which have often been positioned around multiple data centre operators, such as the North East Growth Zone, which is centred around expansions to existing campuses from Cobalt Park Data Centres and QTS’s Cambois site.

Despite being centred around a single expanded campus, the UK Government still has big hopes for the site. It’s hoped that the site will bring more than 3,000 jobs to the area over the coming years, including 50 apprenticeships. Around 800 roles are expected to be higher-paid AI and digital infrastructure jobs, spanning everything from research and software to permanent staff running and maintaining data centres, with the remainder tied to construction and site development.

Alongside job creation, ministers are pointing to £8.2 billion of private investment, plus a community fund worth up to £543 million over the next 15 years, which the Government says will be raised as data centre capacity comes online.

What’s being built as part of the Lanarkshire AI Growth Zone

The Lanarkshire AI Growth Zone may be centred around DataVita and CoreWeave’s partnership, but that doesn’t mean it’s just a single facility. On the contrary, the site is expected to feature 100MW of AI-ready data centre capacity, over 1GW of renewable energy infrastructure connected via private wire, and ‘Innovation Parks’ intended to attract adjacent industries that want proximity to large-scale compute.

That extra power will be key to the deployment of this latest AI Growth Zone, with it seen as a key tenet of gaining the designation, but it should also go some way towards helping reduce public opposition. Another data centre located to the south of Glasgow in Hulford has seen intense local opposition due to its enormous power demands, with residents outraged that the site wouldn’t even need to calculate the environmental impact on the local area. 

DataVita and CoreWeave will be keen to avoid the same backlash – which is why the companies are leaning heavily on a whole host of sustainability claims for their Lanarkshire AI Growth Zone. As well as using renewable energy to help power the site, the two firms also plan to make use of waste heat.

The current plan is that excess heat from cooling systems could, in time, be redirected to support the nearby University Hospital Monklands, described as Scotland’s first fully digital and net zero hospital – though that element is presented as something to be explored once the site is fully up and running, rather than a guaranteed near-term deliverable.

That would be a huge win for advocates of heat networks, with a recent report suggesting that waste heat from UK data centres could heat 3.5m+ homes – it could also help the site win favour with local residents who are impacted by the plans. 

It’s not the only part of the plan that has been developed in a bid to win over residents. In fact, a community fund – worth up to £543 million over 15 years – will also be set up to support local programmes ranging from skills and training packages through to after-school coding clubs and support for local charities and foodbanks. 

DataVita’s parent company, HFD Group, is also expected to contribute £1 million per year to local charities and community groups, on top of the Growth Zone community funding mechanism.

Industry reaction

Commenting on plans for the first AI Growth Zone in Scotland, the UK’s Technology Secretary Liz Kendall noted, “Today’s announcement is about creating good jobs, backing innovation and making sure the benefits AI will bring can be felt across the community – that’s how the UK government is delivering real change for the people of Scotland.

“From thousands of new jobs and billions in investment through to support for local people and their families, AI Growth Zones are bringing generation-defining opportunities to all corners of the country.”

Danny Quinn, Managing Director of DataVita, added, “Scotland has everything AI needs – the talent, the green energy, and now the infrastructure. But this goes beyond the physical build. We’re creating innovation parks, new energy infrastructure, and attracting inward investment from some of the world’s leading technology companies. 

“This is a real opportunity for North Lanarkshire, and we want to make sure local people share in it. The £543 million community fund means the benefits stay here – good jobs, new skills, and investment that actually reaches the people who live and work in this area.”

Matthew Baynes, VP of Secure Power and Data Centres at Schneider Electric UK & Ireland, concluded, “In the twelve months since the introduction of the AI Opportunities Action Plan, the UK has seen much progress towards its AI ambitions.

“The new AI Growth Zone (AIGZ) announced today in Lanarkshire demonstrates just how far the country has come in its plans to build a sovereign AI nation, with Scotland becoming a critical new infrastructure hub and joining those in Wales, Oxfordshire, and the Northeast of England.

“Furthermore, the country has now secured more than £31B in investment from some of the world’s largest, leading tech companies, demonstrating that the UK has the people, resources and ambition to make AI a centrepiece of a new and revitalised Industrial Strategy.

“While this can be considered a success in many respects, there is still much work to do. Access to renewable power remains one of the biggest hurdles facing many parts of the country, and as the UK’s energy technology partner for data centres and AI Infrastructure, we believe there is a clear opportunity to catalyse both the AI and green transitions by turning data centres into the energy centres of the future – fast-tracking new developments with behind-the-meter power generation and microgrids.

“Furthermore, the AIGZ announced today could not be more timely. We believe Scotland, with its cool temperate climate and rich conditions to generate renewable energy, provides a key opportunity to create secure, scalable and sustainable infrastructure capable of galvanising the AI race. Now, the UK’s sustainability and AI ambitions must work together hand-in-glove, demonstrating that today’s technology can be a catalyst for a greener future, powered by AI.”

DCR Predicts: The new bottleneck for AI data centres isn’t technology – it’s permission

29 January 2026 at 08:23

As gigawatt-scale sites move from abstract infrastructure to highly visible ‘AI factories’, Tate Cantrell, Verne CTO, argues that grid capacity, water myths, and local sentiment will decide what actually gets built.

The industry in 2026 will need to get ready for hyper-dense, gigawatt-scale data centres, but preparation will be more complicated than purely infrastructure design. AI’s exploding computational demand is pushing designers to deliver facilities with greater density that consume a growing volume of power and challenge conventional cooling.

The growth of hyperscale campuses risks colliding with a public increasingly aware of power and water consumption. If that happens, a gap may open between what designers can achieve with the latest technology and what communities are willing to accept.

A growing public awareness of data centres

The sector has entered an era of scale that would have seemed implausible a few years ago. Internet giants are investing billions of dollars in facilities that redefine large-scale and are reshaping the market. Gigawatt-class sites are being built to train and deploy AI models for the next generation of online services.

But their impact extends beyond the data centre industry: the communities hosting these ‘AI factories’ are being transformed, too.

This is leading to engineered landscapes: industrial campuses spanning hundreds of acres, integrating data halls with power distribution systems and cooling infrastructure. As these sites become more visible, public awareness of the resources they consume is growing. The data centre has become a local landmark – and it’s under scrutiny.

Power versus perception

Power is one area receiving attention. Data centre growth is coinciding with the perception that hyperscale operators are competing for grid capacity or diverting renewable power that might otherwise support local decarbonisation. There is no shortage of coverage suggesting data centres are pushing up energy prices, too.

These perceptions have already had consequences. In the UK, a proposed 90 MW facility near London was challenged in 2025 by campaigners warning that residents and businesses would be forced to compete for electricity with what one campaign group leader called a “power-guzzling behemoth”. In Belgium, grid operator Elia may limit the power allocated to operators to protect other industrial users.

It would not be surprising to see this reaction continue in 2026, despite the steps taken by all data centre operators to maximise power efficiency and sustainability.

Cool misunderstandings 

Water has become another focal point. Training and inference models rely on concentrated clusters of GPUs with rack densities that exceed 100kW. The amount of heat produced in such a dense space exceeds the capabilities of air-based cooling, driving the move to more efficient liquid systems.
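A quick sensible-heat calculation (our own back-of-envelope illustration, not from the article) shows why air struggles at these densities. Using the standard heat balance Q = ṁ·cp·ΔT with typical data hall values, a 100kW rack needs an enormous volume of air to carry its heat away:

```python
# Illustrative sketch: airflow required to remove rack heat with air alone.
# Heat balance: Q = mass_flow * cp * delta_T.
# The 15 K air-side delta-T is an assumed typical supply/return split.

def airflow_m3_per_s(heat_kw: float, delta_t_k: float = 15.0) -> float:
    cp_air = 1005.0   # J/(kg*K), specific heat of air
    rho_air = 1.2     # kg/m^3, air density at ~20 C
    mass_flow = heat_kw * 1000 / (cp_air * delta_t_k)  # kg/s of air
    return mass_flow / rho_air                          # volumetric, m^3/s

flow = airflow_m3_per_s(100)  # a 100 kW AI rack
print(f"{flow:.1f} m^3/s of air per rack (~{flow * 2119:.0f} CFM)")
```

Roughly 5-6 cubic metres of air per second for a single rack is far beyond what conventional raised-floor delivery can sustain across an aisle, which is what drives the shift to direct-to-chip and other liquid approaches.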

Yet ‘liquid cooling’ is often interpreted by the public as ‘water cooling’, feeding a perception that data centres are draining natural water sources to cool servers.

In practice, this is rarely the case. While data centres of the past have relied heavily on evaporative cooling towers to deliver lower Power Usage Effectiveness, today we see a strong and consistent trend towards lower Water Usage Effectiveness through smarter cooling and sustainable design. Developments in technology are making water-free cooling possible, too, with half of England’s data centres using waterless cooling. Many operators use non-water coolants and closed-loop systems that conserve resources.

Data centres as part of the community 

Addressing public concerns will require a change in how operators think about their place in communities. Once built, a data centre becomes part of the local fabric and the company behind it, a neighbour. Developers need to view that relationship as more than transactional. They must demonstrate that growth is supported by resilient grids capable of meeting new demand without destabilising supply or driving up cost.

Water and power are essential resources, so public concern is understandable. It’s therefore important that operators show that density and efficiency can be achieved without disproportionate environmental impact. The continued rollout of AI-ready data centres will depend as much on social alignment as on advances in chip performance.

That alignment will be tested in 2026 and beyond as another wave of high-density deployments arrives. Based on NVIDIA’s product roadmap, we already have a sense of what’s coming: each generation of hardware delivers more power and heat, requiring more advanced infrastructure.

NVIDIA’s Chief Executive Jensen Huang introduced the DSX data centre architecture at GTC 2025 in Washington DC, a framework designed to make it easier for developers with limited experience to deploy large-scale, AI-ready facilities. In effect, it offers a global blueprint for gigawatt-scale ‘AI factories’.

A positive outcome of this will be a stronger push towards supply chain standardisation. Companies such as Vertiv, Schneider Electric and Eaton are aligning around modular power and cooling systems that are easily integrated into these architectures. NVIDIA, AMD and Qualcomm, meanwhile, have every incentive to encourage that standardisation. The faster infrastructure can be deployed, the faster their chips can deliver the required compute capacity.

Standardisation, then, becomes a commercial and operational imperative, but it also reinforces the need for transparency and shared responsibility.

Efficiency and expansion 

Behind all of this lies the computational driver: the transformer model. These AI architectures process and generate language, code or other complex data at scale — the foundation of today’s generative AI. They are, however, enormously power-hungry, and even though it’s reasonable to expect a few DeepSeek-type breakthroughs in 2026 – discoveries that achieve similar performance with far less energy thanks to advances in algorithms, hardware and networking – we shouldn’t expect demand for power to drop.

The technical roadmap during 2026 is clear. We are heading towards greater density, wider uptake of liquid cooling and further standardisation. With data centres running as efficiently and sustainably as possible, developers and operators will need to establish trust with local stakeholders for the resources required to develop and power the AI factories that will drive a new era of industrial innovation.

This article is part of our DCR Predicts 2026 series. Check back every day this week for a new prediction, as we count down the final days of January.

DCR Predicts 2026

We’re going On the Record with a new column series

By: DCR
28 January 2026 at 15:20

Data Centre Review is launching a new monthly column series, dubbed On the Record, which will feature regular commentary from named contributors across the data centre industry.

The new series is designed to provide a spotlight to select voices to share perspectives on the issues shaping the sector, from resilience and energy regulation to skills and emerging technologies.

Data Centre Review has always been a town hall – a place where diverse opinions are allowed to shine, and that will continue. However, unlike one-off guest comment pieces, On the Record is structured as a recurring series, with contributors publishing on Data Centre Review each and every month. That gives our readers a consistent set of industry viewpoints to follow over time.

What to expect from On the Record

Each On the Record column will offer a direct, accountable viewpoint from a recognised organisation or specialist contributor. Topics will span the challenges and opportunities facing data centres today, including:

  • Design, build and operations best practice
  • Emerging trends and technology impacts
  • Energy, sustainability and regulation
  • Infrastructure and resilience
  • Skills, talent and leadership

We’re launching the series with two initial contributors: 

On the Record with the Data Centre Alliance – This column will bring an industry-wide perspective on standards, priorities and the big conversations influencing the sector.

On the Record with Critical Careers – This column will focus on careers and representation in the industry, with an emphasis on women entering data centres and the barriers that still exist.

The first On the Record with the Data Centre Alliance column is now live, exploring water scarcity and whether the UK’s data centre industry can do more to reduce its water usage. You can read that here.

Additional contributors are expected to be added over time, expanding the range of organisations and topics represented within the series. 

DCR Predicts – UK data centres are booming – but is the power running out?

By: DCR
27 January 2026 at 08:00

A panel of experts explore why grid capacity, connection queues, and rising AI power density are starting to dictate what can be built in 2026 – and where.

The UK’s data centre boom is accelerating, fuelled by the AI gold rush. Hyperscalers are expanding campuses and investment continues to flow, but the practical limits of growth are becoming harder to ignore.

Data centres already account for around 2.5% of the UK’s electricity consumption, and with AI workloads accelerating, that could rise sharply. Power availability, grid connection delays, planning constraints and sustainability pressures are no longer background considerations. As 2026 approaches, they are actively shaping what can be built, where, and how.

Power limits are no longer theoretical

For years, efficiency improvements helped offset rising demand, but that buffer is eroding quickly as AI pushes power density beyond what many facilities were designed to support.

Skip Levens, Quantum’s Product Leader and AI Strategist, the LTO Program, sees a clear roadblock ahead. “In 2026, AI and HPC data centre buildouts will hit a non-negotiable limit: they cannot get more power into their data centres. Build-outs and expansions are on hold and power-hungry GPU-dense servers are forcing organisations to make hard choices.”

He suggests that modern tape libraries could be the solution to two pressing problems, “First by returning as much as 75% of power to the power budget to ’spend’ on GPUs and servers, while also keeping massive data sets nearby on highly efficient and reliable tape technology.”

Whether or not operators adopt that specific approach, the wider point holds. Growth is no longer just about adding capacity – it’s about how power is allocated and conserved within fixed limits.

Sustainability under pressure

Sustainability remains a defining theme for the sector, but the pace of AI-driven expansion is testing how deeply those commitments are embedded.

Terry Storrar, Managing Director at Leaseweb UK, describes the balancing act many operators are facing, “Sustainability is still the number one topic in the data centre industry. This has to work for the planet, but also from an economic perspective.

“We can’t keep running huge workloads and adding these to the grid,” he warns, “it’s simply not sustainable for the long term. So, there is huge investment into how we make technology do more for less. In the data centre industry, this translates into achieving significant power efficiencies.”

Mark Skelton, Chief Technology Officer at Node4, agrees, warning, “Data centres already consume around 2% of national power, while unchecked growth could push that to 10-15%, at a time when the grid is already strained and struggling to keep pace with soaring demand. In some areas, new developments are being delayed simply because the grid cannot deliver the required capacity quickly enough.”

To put this into perspective, Google’s new Essex facility alone is estimated to emit the same amount of carbon as 500 short-haul flights every year.

Grid delays, planning and skills gaps

There’s also a broader question of how well prepared the UK actually is for such a rapid scale-up in data centre infrastructure.

“Currently, the rush to build is overshadowing the need for a comprehensive approach that considers how facilities draw power and utilise water, as well as how their waste heat could be repurposed for nearby housing or industry,” Node4’s Skelton continues. “The technology to do this already exists, but adoption remains limited because there is little incentive or regulation to encourage it.”

In the UK, high-capacity grid connections can take over a year to secure, while planning delays and local opposition add further friction. Another roadblock is that “communities will increasingly challenge data centre expansion over water and energy use,” warns Curt Geeting, Acoustic Imaging Product Manager at Fluke. This is “pushing operators toward self-contained microgrids, hydrogen fuel cells, and other alternative power sources. Meanwhile, a growing shortage of skilled technicians and electricians will become a defining constraint.”

Geeting believes automation and AI will be key to tackling some of these infrastructure roadblocks. “The data centre test and measurement market will enter 2026 on the brink of a major transformation driven by speed, density, and intelligence. Multi-fibre connectivity will expand rapidly to meet the bandwidth demands of AI-driven workloads, edge computing, and cloud-scale growth.

“Very small form factor connectors, multi-core fibre, and even air-core fibre technologies will begin reshaping how data moves through high-density environments – enabling faster transmission with lower latency. At the same time, automation and AI will take centre stage in testing and diagnostics, as intelligent tools and software platforms automate calibration tracking, compliance verification, and predictive maintenance across vast, complex facilities.”
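To illustrate one of the workflows Geeting describes, automated calibration tracking can be as simple as comparing each instrument’s last calibration date against its interval; the instrument names and intervals below are hypothetical:

```python
# A minimal sketch of automated calibration tracking (instrument names and
# intervals are hypothetical): flag anything past its calibration interval.
from datetime import date, timedelta

def overdue(instruments, today):
    """Return the names of instruments whose last calibration has expired."""
    return [name for name, (last_cal, interval_days) in instruments.items()
            if today - last_cal > timedelta(days=interval_days)]

fleet = {
    "otdr-01":    (date(2025, 1, 10), 365),  # optical time-domain reflectometer
    "fiber-cert": (date(2025, 11, 2), 180),  # fibre certification tester
}
print(overdue(fleet, today=date(2026, 2, 1)))  # ['otdr-01']
```

The same check scales to thousands of instruments once the inventory lives in a database rather than a dictionary.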

Edge, sovereignty and a rethink of scale

Data centres remain the backbone of the digital economy, underpinning everything from cloud services to AI and edge computing. With the rapid rise in AI, there are concerns that the UK will struggle to keep pace.

“The AWS outage reminded everyone how risky it is to depend too heavily on centralised cloud infrastructure,” urges Bruce Kornfeld, Chief Product Officer at StorMagic. “When a single technical issue can disrupt entire operations at a massive scale, CIOs are realising that stability requires balance.

“In 2026, more organisations will move toward proven on-premises hyperconverged infrastructure for mission-critical applications at the edge. This approach integrates cloud connectivity to simplify operations, strengthen uptime and deliver consistent performance across all environments. AI will continue to accelerate this shift.”

“The year ahead will favour a shift toward simplicity, uptime and management,” he adds. “The organisations that succeed will be those that figure out how to avoid downtime with simple and reliable on-prem infrastructure to run local applications. These winners understand that chasing scale for its own sake does nothing but put them in a vulnerable position.” This redistribution may ease pressure on hyperscale campuses.

Looking to 2026

Looking ahead to 2026, the pressures facing UK data centres are unlikely to ease. Power constraints, grid delays and sustainability expectations are becoming long-term issues, not just temporary obstacles. While technologies like quantum computing may eventually reshape infrastructure design, they won’t resolve the immediate challenges operators face today. The UK still has an opportunity to lead in AI and digital infrastructure, but only if growth is planned with constraint in mind. Without clearer coordination, incentives and accountability, the rush to build risks locking inefficiencies into the system for years to come. 

This article is part of our DCR Predicts 2026 series. Check back every day this week for a new prediction, as we count down the final days of January.

AI Data Center Market to Surpass USD 1.98 Trillion by 2034

21 January 2026 at 15:00

The global AI data center market was valued at USD 98.2 billion in 2024 and is estimated to grow at a CAGR of 35.5% to reach USD 1.98 trillion by 2034, according to a recent report by Global Market Insights Inc.
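Those headline numbers can be sanity-checked with simple compounding (the arithmetic below only verifies internal consistency; the inputs are the report’s own figures):

```python
# Compounding the 2024 base at the stated CAGR for ten years should land
# near the 2034 projection of USD 1.98 trillion.
start_usd_bn, cagr, years = 98.2, 0.355, 10
projected_usd_bn = start_usd_bn * (1 + cagr) ** years
print(round(projected_usd_bn))  # 2049 -> roughly USD 2.0T, consistent with USD 1.98T
```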

Growing adoption of generative AI and machine learning tools requires extraordinary processing power and storage capabilities, increasing reliance on data centers specifically optimized for AI workloads. These environments depend on advanced GPUs, scalable system architecture, and ultra-low-latency networking to support complex model training and inference across industries such as finance, healthcare, and retail. Big data analytics is also accelerating demand, as organizations handle massive streams of structured and unstructured information that must be processed rapidly.

AI-focused facilities enable high-performance computing for real-time workloads, strengthening their role as essential infrastructure for global digital transformation. The rapid expansion of cloud computing, along with the rising number of hyperscale facilities, continues to amplify the need for AI-ready infrastructures. Providers are investing in advanced AI data platforms that offer scalable services to enterprises and developers, further increasing market momentum.

The hardware segment of the AI data center market accounted for USD 61.1 billion in 2024. Growth is driven by expanding use of AI chips, GPU accelerators, advanced cooling technologies, high-density server systems, and optical networking solutions. Rising GPU energy requirements, the shift toward rack densities of 30-120 kW, and large-scale deployment strategies introduced by leading technology companies are shaping long-term capital allocation in the sector.

The cloud-based category held a 58% share in 2024 and is projected to grow at a CAGR of 35.2% from 2025 through 2034. This segment leads due to its unmatched scalability, flexible consumption options, and access to the latest AI-accelerated computing hardware without upfront investment. Hyperscale providers are making multi-billion-dollar commitments to strengthen global AI infrastructures, propelling adoption of AI-driven services and increasing demand for GPUs, TPUs, and specialized processors.

The U.S. AI data center market generated USD 33.2 billion in 2024. The country maintains a leading position supported by prominent hyperscale operators and substantial investments in GPU clusters, liquid cooling, and large-scale AI-aligned builds. Federal incentives, regional tax advantages, and infrastructure funding have further solidified the United States as the most capacity-rich region for AI computing.

Key participants in the AI data center market include Huawei, AWS, NVIDIA, HPE, Digital Realty, Google, Lenovo, Microsoft, Equinix, and Dell Technologies. Companies expanding their foothold in the market are focusing on infrastructure modernization, large-scale GPU deployments, and energy-efficient system design.

Many firms are investing in high-density racks, integrated liquid cooling, and next-generation networking to support advanced AI workloads. Strategic partnerships with chipmakers, cloud providers, and colocation operators help accelerate capacity expansion and ensure access to cutting-edge AI hardware. Providers are also scaling global data center footprints, enhancing automation capabilities, and optimizing power utilization through renewable-energy integration. Long-term contracts with enterprises, AI-as-a-service offerings, and the buildout of specialized AI clusters further reinforce competitive positioning and market dominance.

The post AI Data Center Market to Surpass USD 1.98 Trillion by 2034 appeared first on Data Center POST.

Creating Critical Facilities Manpower Pipelines for Data Centers

23 December 2025 at 15:00

The digital technology ecosystem and virtual spaces are powered by data – its storage, processing, and computation – and data centers are the mitochondria on which this ecosystem depends. From online gaming and video streaming (including live events) to e-commerce transactions, credit and debit card payments, and the complex algorithms that drive artificial intelligence (AI), machine learning (ML), cloud services, and enterprise applications, data centers support nearly every aspect of modern life. Yet the professionals who operate and maintain these facilities – data center facilities engineers, technicians, and operators – remain largely the unsung heroes of the information age.

Most end users, particularly consumers, rarely consider the backend infrastructure that enables their digital experiences. The continuous operation of data centers depends on the availability of adequate and reliable power and cooling for critical IT loads, robust fire protection systems, and tightly managed operational processes that together ensure uptime and system reliability. For users, however, the expectation is simple and unambiguous: online services must work seamlessly and be available whenever they are needed.

According to the Data Center Map, there are 668 data centers in Virginia, more than 4,000 in the United States, and over 11,000 globally. Despite this rapid growth, the industry faces a significant challenge: it is not producing enough qualified technicians, engineers, and operators to keep pace with the expansion of data center infrastructure in the United States, even though average total compensation is $70,000 and may reach $109,000 in Northern Virginia, as estimated by Glassdoor.

Data center professionals require highly specialized electrical and mechanical maintenance skills and knowledge of network/server operations gained through robust training and hands-on experience. Sadly, the industry risks falling short of its workforce needs due to the unprecedented scale and speed of data center construction. This growth is being fueled by the global race for AI dominance, increasing demand for digital connectivity, and the continued expansion of cloud computing services.

Industry projections highlight the magnitude of the challenge. Omdia (as reported by Data Center Dynamics) suggests data center investment will likely hit $1.6 trillion by 2030, while BloombergNEF forecasts data-center demand of 106 gigawatts by 2035. All these projects and projections demand skilled individuals the industry does not currently have, and the vacuum could create problems in the future if not filled with the right people. According to the Uptime Institute’s 2023 survey, 58% of operators are finding it difficult to get qualified candidates and 55% say they are having difficulty retaining staff. The Uptime Institute’s 2024 data center staffing and recruitment survey shows turnover rates of 26% and 21% for electrical and mechanical trades respectively. The Birmingham Group estimates that AI facilities will create about 45,000 data center technician and engineer jobs, with employment projected to reach 780,000 by 2030.

Meeting current and future workforce demands requires both leveraging existing talent pipelines and creating new ones. Technology is growing and evolving at high speed, and filling critical data center positions increasingly demands professionals who are not only technically skilled, but also continuously trained to keep up with rapidly changing industry standards and technologies.

Organizational Apprenticeship and Training Programs

Organizations should invest in in-house training and apprenticeship programs for individuals with technical training from community colleges, creating pipelines of technically skilled people to fill critical positions. This will help secure the future of critical roles within the data center industry.

Trade Programs Expansion in Community College

Community colleges should expand their technical trade programs, because these programs create life-sustaining careers with the possibility of earning high incomes. Northern Virginia Community College has spearheaded data center operations programs to train individuals who can comfortably fill entry-level data center critical facilities positions in northern Virginia and beyond.

Veterans Re-entry Programs 

Many military veterans possess the transferable skills needed within data center critical facilities, and organizations should leverage this opportunity by harnessing the Disabled American Veterans organization, the DOD’s Transition Assistance Program, and other military and DOD programs.

# # #

About the Author

Rafiu Sunmonu is the Supervisor of Critical Facilities Operations at NTT Global Data Centers Americas, Inc.

The post Creating Critical Facilities Manpower Pipelines for Data Centers appeared first on Data Center POST.

Nostrum Data Centers to Deliver 500 MW of AI-Ready, Sustainable Capacity in Spain by 2027

17 December 2025 at 14:00

As demand for AI, cloud, and hyperscale infrastructure accelerates across Europe, Nostrum Data Centers is advancing a new generation of sustainable, high-performance data center assets in Spain, with availability beginning in 2027.

The Spain-based developer is delivering more than 500 MW of IT capacity, supported by secured land and power, enabling customers to move quickly from planning to deployment. With 300 MW of power already secured and scalable to 500 MW, Nostrum is addressing Europe’s growing need for resilient, efficient digital infrastructure.

Earlier this month, Nostrum Data Centers, part of Nostrum Group, announced that AECOM will design and manage its $2.1 billion data center campus in Badajoz, one of six strategically located developments across the country. These sites leverage Spain’s strong subsea connectivity, competitive energy costs, and robust power availability to support scalable growth.

“Our Spain-based data centers combine strategic site selection, secured power connections, and AI-ready infrastructure to meet the demands of the next-generation digital economy,” said Gabriel Nebreda, Chief Executive Officer at Nostrum Group. “Our team of industry leaders with over 25 years of experience are developing facilities that are not only highly efficient and scalable but also fully sustainable, supporting both our customers’ growth and global climate goals.”

Engineered for high-density AI and cloud workloads, Nostrum’s facilities are designed to achieve a PUE of 1.1 and a WUE of zero, eliminating water usage for cooling. Collectively, the developments are expected to prevent 10 million metric tonnes of CO2 emissions, aligning with the United Nations Sustainable Development Goals.
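For reference, the two metrics Nostrum cites are simple ratios; the kWh and litre figures below are illustrative, not Nostrum’s operating data:

```python
# The two headline metrics, as defined by The Green Grid:
#   PUE = total facility energy / IT equipment energy (1.0 is the ideal)
#   WUE = litres of water consumed / kWh of IT energy (0 means waterless cooling)

def pue(total_facility_kwh, it_kwh):
    return total_facility_kwh / it_kwh

def wue(water_litres, it_kwh):
    return water_litres / it_kwh

print(pue(110_000, 100_000))  # 1.1 -> only 10% overhead beyond the IT load
print(wue(0, 100_000))        # 0.0 -> no water consumed for cooling
```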

Nostrum’s 2027 delivery timeline reinforces its commitment to providing efficient, future-ready infrastructure across Spain for AI, cloud, and hyperscale customers.

Learn more about Nostrum Data Centers: www.thenostrumgroup.com/nostrum-data-centres

Click here to read the press release!

The post Nostrum Data Centers to Deliver 500 MW of AI-Ready, Sustainable Capacity in Spain by 2027 appeared first on Data Center POST.

Building the Next Generation of Data Center Leaders: A Conversation with Luke Adams

12 December 2025 at 14:30

Episode 63 of NEDAS Live! features a fresh and vital perspective on the data center industry with Luke Adams, analyst at DPGlobal Assets and co-founder of Data Center Youngbloods. Host Ilissa Miller, CEO of iMiller Public Relations, explores how this young leader is paving the way for new talent and greater inclusivity in the digital infrastructure sector.

Creating Opportunity in the Foundation of AI

DPGlobal Assets specializes in global digital infrastructure development, particularly data centers, from ideation through operation. Adams, who transitioned from being a college graduate to an industry analyst, shares what drew him to the sector: the realization that data centers are at the heart of the AI revolution and the backbone of the digital world. “Data centers are the reason that ChatGPT exists, and they’re the reason that AI will continue to skyrocket,” Adams explains, reflecting on how the sector’s unseen complexity offers immense opportunities for recent graduates willing to learn and grow.​

Launching Data Center Youngbloods

Noting the disconnect between academia and the industry, Adams co-founded Data Center Youngbloods with his brother to fix the pipeline. Adams observed that most industry events were filled with seasoned professionals, leaving young entrants feeling out of place. Data Center Youngbloods aims to make digital infrastructure careers visible, accessible, and welcoming by bridging the workforce gap and connecting newcomers with mentorship, certification pathways, and a growing peer community. “We’re building the community that I wish existed when I first started out,” says Adams.

Empowerment, Mentorship, and Debunking Myths

Adams also highlights the power of mentorship and networking. Young professionals often get discouraged by strict experience requirements, but he urges them to be curious, proactive, and fearless in asking questions. He credits mentorship for his rapid growth and emphasizes that skills and knowledge can be gained on the job with the right attitude. Data Center Youngbloods is cultivating in-person events, virtual meetings, and access to supportive mentors, resources that Adams lacked when he began.​

Driving Change One Conversation at a Time

As Data Center Youngbloods’ network expands, Adams’s message centers on paying it forward and breaking down barriers for newcomers. The initiative welcomes both seasoned professionals and emerging talent, offering a booking portal for mentorship and building its community through LinkedIn and direct outreach. Adams’s core advice for future leaders: “Everything is learnable. Be proactive, get involved, and don’t be afraid to reach out, no matter your background.”

To continue the conversation, listen to episode 63 of the podcast here.

The post Building the Next Generation of Data Center Leaders: A Conversation with Luke Adams appeared first on Data Center POST.

The Rising Risk Profile of CDUs in High-Density AI Data Centers

10 December 2025 at 17:00

AI has pushed data center thermal loads to levels the industry has never encountered. Racks that once operated comfortably at 8-15 kW are now climbing past 50-100 kW, driving an accelerated shift toward liquid cooling. This transition is happening so quickly that many organizations are deploying new technologies faster than they can fully understand the operational risks.

In my recent five-part LinkedIn series:

  • 2025 U.S. Data Center Incident Trends & Lessons Learned (9-15-2025)
  • Building Safer Data Centers: How Technology is Changing Construction Safety (10-1-2025)
  • The Future of Zero-Incident Data Centers (10-15-2025)
  • Measuring What Matters: The New Safety Metrics in Data Centers (11-1-2025)
  • Beyond Safety: Building Resilient Data Centers Through Integrated Risk Management (11-15-2025)

— a central theme emerged: as systems become more interconnected, risks become more systemic.

That same dynamic influenced the Direct-to-Chip Cooling: A Technical Primer article that Steve Barberi and I published in Data Center POST (10-29-2025). Today, we are observing this systemic-risk framework emerging specifically in the growing role of Cooling Distribution Units (CDUs).

CDUs have evolved from peripheral equipment to a true point of convergence for engineering design, controls logic, chemistry, operational discipline, and human performance. As AI rack densities accelerate, understanding these risks is becoming essential.

CDUs: From Peripheral Equipment to Critical Infrastructure

Historically, CDUs were treated as supplemental mechanical devices. Today, they sit at the center of the liquid-cooling ecosystem governing flow, pressure, temperature stability, fluid quality, isolation, and redundancy. In practice, the CDU now operates as the boundary between stable thermal control and cascading instability.

Yet, unlike well-established electrical systems such as UPSs, switchgear, and feeders, CDUs lack decades of operational history. Operators, technicians, commissioning agents, and even design teams have limited real-world reference points. That blind spot is where a new class of risk is emerging, and three patterns are showing up most frequently.

A New Risk Landscape for CDUs

  • Controls-Layer Fragility
    • Controls-related instability remains one of the most underestimated issues in liquid cooling. Many CDUs still rely on single-path PLC architectures, limited sensor redundancy, and firmware not designed for the thermal volatility of AI workloads. A single inaccurate pressure, flow, or temperature reading can trigger inappropriate or incorrect system responses affecting multiple racks before anyone realizes something is wrong.
  • Pressure and Flow Instability
    • AI workloads surge and cycle, producing heat patterns that stress pumps, valves, gaskets, seals, and manifolds in ways traditional IT never did. These fluctuations are accelerating wear modes that many operators are just beginning to recognize. Illustrative Open Compute Project (OCP) design examples (e.g., 7–10 psi operating ranges at relevant flow rates) are helpful reference points, but they are not universal design criteria.
  • Human-Performance Gaps
    • CDU-related high-potential near misses (HiPo NMs) frequently arise during commissioning and maintenance, when technicians are still learning new workflows. For teams accustomed to legacy air-cooled systems, tasks such as valve sequencing, alarm interpretation, isolation procedures, fluid handling, and leak response are unfamiliar. Unfortunately, as noted in my Building Safer Data Centers post, when technology advances faster than training, people become the first point of vulnerability.
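One mitigation for the controls-layer fragility described above is cross-validated sensing. As a minimal sketch (thresholds and units are illustrative, not vendor specifications), median voting across triplicated sensors stops a single bad transducer from triggering a loop-wide response:

```python
# Cross-validated telemetry sketch: median voting across three redundant
# sensors, so one faulty reading cannot drive a multi-rack system response.

def voted_reading(samples):
    """Median of three redundant samples; robust to a single outlier."""
    return sorted(samples)[1]

def pressure_in_band(samples, low=20.0, high=60.0):
    """Act on the voted value (psi, illustrative), never a single sensor."""
    return low <= voted_reading(samples) <= high

print(pressure_in_band([41.8, 42.1, 3.2]))   # True: the 3.2 psi outlier is outvoted
print(pressure_in_band([71.5, 70.9, 72.3]))  # False: all three agree the loop is high
```

The same voting logic applies to flow and temperature channels, which is why dual- and triple-sensor architectures keep appearing in the best-practice discussions below.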

Photo: Borealis CDU (courtesy AGT)

Additional Risks Emerging in 2025 Liquid-Cooled Environments

Beyond the three most frequent patterns noted above, several quieter but equally impactful vulnerabilities are also surfacing across 2025 deployments:

  • System Architecture Gaps
    • Some first-generation CDUs and loops lack robust isolation, bypass capability, or multi-path routing. A single point of failure, such as a valve, pump, or PLC, can drive full-loop shutdowns, mirroring the cascading-risk behaviors highlighted in my earlier work on resilience.
  • Maintenance & Operational Variability
    • SOPs for liquid cooling vary widely across sites and vendors. Fluid handling, startup/shutdown sequences, and leak-response steps remain inconsistent, creating conditions for preventable HiPo NMs.
  • Chemistry & Fluid Integrity Risks
    • As highlighted in the DTC article Steve Barberi and I co-authored, corrosion, additive depletion, cross-contamination, and stagnant zones can quietly degrade system health. ICP-MS analysis and other advanced techniques are recommended in OCP-aligned coolant programs for PG-25-class fluids, though not universally required.
  • Leak Detection & Nuisance Alarms
    • False positives and false negatives, especially across BMS/DCIM integrations, remain common. Predictive analytics are becoming essential despite not yet being formalized in standards.
  • Facility-Side Dynamics
    • Upstream conditions such as temperature swings, ΔP fluctuations, water hammer, cooling tower chemistry, and biofouling often drive CDU instability. CDUs are frequently blamed for behavior originating in facility water systems.
  • Interoperability & Telemetry Semantics
    • Inconsistent Modbus, BACnet, and Redfish mappings, naming conventions, and telemetry schemas create confusion and delay troubleshooting.
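The telemetry-semantics problem in particular is tractable with a normalisation layer. A minimal sketch, with entirely hypothetical vendor point names, maps raw readings onto one canonical schema before they reach BMS/DCIM tooling:

```python
# Telemetry normalisation sketch (all vendor point names are hypothetical):
# translate inconsistent vendor registers onto one canonical schema.

CANONICAL = {
    "SupplyTemp_C":            ("cdu.supply_temp", "degC"),
    "secondary_loop_supply_t": ("cdu.supply_temp", "degC"),  # same point, other vendor
    "PumpSpd_Pct":             ("cdu.pump_speed", "percent"),
}

def normalise(vendor_points):
    """Map raw vendor readings to canonical names; drop unknown points."""
    out = {}
    for name, value in vendor_points.items():
        if name in CANONICAL:
            canonical, unit = CANONICAL[name]
            out[canonical] = (value, unit)
    return out

print(normalise({"SupplyTemp_C": 32.5, "PumpSpd_Pct": 71.0, "Mystery": 9}))
# {'cdu.supply_temp': (32.5, 'degC'), 'cdu.pump_speed': (71.0, 'percent')}
```

Until naming conventions converge across Modbus, BACnet, and Redfish mappings, a mapping table of this kind is often the fastest path to consistent dashboards.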

Best Practices: Designing CDUs for Resilience, Not Just Cooling Capacity

If CDUs are going to serve as the cornerstone of liquid cooling in AI environments, they must be engineered around resilience, not simply performance. Several emerging best practices are gaining traction:

  1. Controls Redundancy
    • Dual PLCs, dual sensors, and cross-validated telemetry signals reduce single-point failure exposure. These features do not have prescriptive standards today but are rapidly emerging as best practices for high-density AI environments.
  2. Real-Time Telemetry & Predictive Insight
    • Detecting drift, seal degradation, valve lag, and chemistry shift early is becoming essential. Predictive analytics and deeper telemetry integration are increasingly expected.
  3. Meaningful Isolation
    • Operators should be able to isolate racks, lines, or nodes without shutting down entire loops. In high-density AI environments, isolation becomes uptime.
  4. Failure-Mode Commissioning
    • CDUs should be tested not only for performance but also for failure behavior such as PLC loss, sensor failures, false alarms, and pressure transients. These simulations reveal early-life risk patterns that standard commissioning often misses.
  5. Reliability Expectations
    • CDU design should align with OCP’s system-level reliability expectations, such as MTBF targets on the order of >300,000 hours for OAI Level 10 assemblies, while recognizing that CDU-specific requirements vary by vendor and application.
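As a sketch of the predictive-insight practice in point 2 (the smoothing factor, threshold, and pressure values are illustrative assumptions), an exponentially weighted moving average can flag slow drift well before a hard alarm trips:

```python
# Predictive drift detection sketch: an exponentially weighted moving average
# (EWMA) baseline flags readings that depart from recent behaviour.

def drift_alerts(readings, alpha=0.2, threshold=2.0):
    """Indices where a reading departs from the EWMA baseline by > threshold."""
    ewma, alerts = readings[0], []
    for i, value in enumerate(readings[1:], start=1):
        if abs(value - ewma) > threshold:
            alerts.append(i)
        ewma = alpha * value + (1 - alpha) * ewma  # update the baseline
    return alerts

# Stable loop pressure, then a seal starts to let go around sample 5.
print(drift_alerts([42.0, 42.1, 41.9, 42.0, 42.2, 44.8, 46.5, 48.1]))  # [5, 6, 7]
```

Production systems would add per-channel baselines and hysteresis, but the principle is the same: detect the trend, not just the limit violation.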

Standards Alignment

The risks and mitigation strategies outlined above align with emerging guidance from ASHRAE TC 9.9 and the OCP’s liquid-cooling workstreams, including:

  • OAI System Liquid Cooling Guidelines
  • Liquid-to-Liquid CDU Test Methodology
  • ASTM D8040 & D1384 for coolant chemistry durability
  • IEC/UL 62368-1 for hazard-based safety
  • ASHRAE 90.4, PUE/WUE/CUE metrics, and
  • ANSI/BICSI 002, ISO/IEC 22237, and Uptime’s Tier Standards emphasizing concurrently maintainable infrastructure.

These collectively reinforce a shift: CDUs must be treated as availability-critical systems, not auxiliary mechanical devices.

Looking Ahead

The rise of CDUs represents a moment the data center industry has seen before. As soon as a new technology becomes mission-critical, its risk profile expands until safety, engineering, and operations converge around it. Twenty years ago, that moment belonged to UPS systems. Ten years ago, it was batteries. Now, in AI-driven environments, it is the CDU.

Organizations that embrace resilient CDU design, deep visibility, and operator readiness will be the ones that scale AI safely and sustainably.

# # #

About the Author

Walter Leclerc is an independent consultant and recognized industry thought leader in Environmental Health & Safety, Risk Management, and Sustainability, with deep experience across data center construction and operations, technology, and industrial sectors. He has written extensively on emerging risk, liquid cooling, safety leadership, predictive analytics, incident trends, and the integration of culture, technology, and resilience in next-generation mission-critical environments. Walter led the initiatives that earned Digital Realty the Environment+Energy Leader’s Top Project of the Year Award for its Global Water Strategy and recognition on EHS Today’s America’s Safest Companies List. A frequent global speaker on the future of safety, sustainability, and resilience in data centers, Walter holds a B.S. in Chemistry from UC Berkeley and an M.S. in Environmental Management from the University of San Francisco.

The post The Rising Risk Profile of CDUs in High-Density AI Data Centers appeared first on Data Center POST.

How Artificial Intelligence Is Redefining the Future of Global Infrastructure

3 December 2025 at 16:00

At infra/STRUCTURE Summit 2025, industry leaders from Inflect, NTT and NextDC explored how AI is accelerating development timelines, reshaping deal structures, and redrawing the global data center map.

The infra/STRUCTURE Summit 2025, held at The Wynn Las Vegas from October 15–16, 2025, convened the brightest minds in digital infrastructure to explore the seismic shifts underway in the age of artificial intelligence. Among the most forward-looking sessions was “AI Impact on Global Market Expansion Patterns,” a discussion that unpacked how AI is transforming where and how data centers are developed, financed, and operated worldwide.

Moderated by Swapna Subramani, Research Director, IMEA, for Structure Research, the panel featured leading executives: Mike Nguyen, CEO, Inflect; Steve Lim, SVP, Marketing & GTM, NTT Global Data Centers; and Craig Scroggie, CEO and Managing Director, NEXTDC. Together, they examined how the explosive demand for AI compute power is pushing developers to rethink long-held assumptions about geography, energy, and risk.

AI Is Rewriting the Rules of Global Expansion

For decades, site selection decisions revolved around a handful of core variables: power cost, connectivity, and proximity to major user populations. But in 2025, those rules are being rewritten by the unprecedented scale of AI workloads.

Regions once considered secondary are suddenly front-runners. Scroggie noted how saturation in markets like Singapore and Hong Kong has forced expansion across Thailand, Indonesia, Malaysia, and India, each now racing to deliver power, land, and permitting capacity fast enough to attract global hyperscalers.

“You can’t build large campuses in Singapore anymore,” Scroggie said. “But throughout Southeast Asia, we’re seeing rapid acceleration as operators balance scale, sustainability, and access to emerging population centers.”

The panelists agreed that energy constraints, not capital, are now the primary limiting factor. “The short term is about finding locations where power exists at scale,” explained Scroggie. “The longer-term challenge is developing new storage and generation models to make that power sustainable.”

Geopolitics and Sovereignty Are Shaping Investment

AI’s global reach has also brought geopolitics and national sovereignty to the forefront of infrastructure strategy.

“We’re living in more challenging times than ever before,” said Nguyen, referencing chip export restrictions and international trade interventions. “AI is no longer just a technological conversation, it’s a matter of national defense and economic competitiveness.”

He noted that ongoing trade restrictions with China are reshaping who gets access to advanced chips and where they can be deployed. “The combination of geopolitical and local legislative pressures determines the future of global trade management,” Nguyen said.

As countries strengthen data sovereignty and privacy laws, regional differentiation is intensifying. “Every geography has a different view,” Nguyen continued. “Some nations are creating frameworks to enable AI and cross-border data sharing, others are locking down their ecosystems entirely.”

Scroggie echoed this, adding that sovereignty-driven strategies are driving a surge in localized buildouts. “We’re seeing more countries push to ensure domestic control of digital assets,” he said. “That’s changing the structure of global supply chains and creating ripple effects that extend well beyond national borders.”

The Industry’s Race Against Time

The conversation turned toward construction velocity, a challenge every developer feels acutely.

“Are we building fast enough?” asked Subramani, the conversation’s moderator.

“Simply put, no,” said Scroggie. “We can’t keep up with demand. Traditional 12-to-24-month build cycles no longer align with AI’s acceleration curve. We have to find a way to build differently.”

The group discussed the need for new modular construction methods, accelerated permitting, and AI-assisted project management to meet scale and speed requirements.

Nguyen framed it within the broader context of industrial history. “We are standing at the dawn of the next industrial revolution,” he said. “Just as steam, electricity, and the internet reshaped economies, AI will redefine global competitiveness. The countries that can deliver sustainable, affordable power will lead.”

He pointed to the Jevons paradox of AI infrastructure: the more intelligence we produce, the cheaper it becomes, and the more of it the world demands. “The hallmark of global competitiveness will be the unit cost of producing intelligence,” Nguyen explained. “That requires deep collaboration between developers, energy providers, and governments.”

Evolving Deal Structures Reflect a More Complex Market

The financial framework of data center development is also changing dramatically. Traditional “build-to-suit” models are giving way to more creative, multi-tiered partnerships as both hyperscalers and institutional investors seek flexibility and risk mitigation.

“There’s a diversity of players now entering the market, some with deep operational experience, others completely new to the space,” said Scroggie. “Everyone’s chasing the same megawatts, but their risk tolerance and credit profiles vary widely.”

Scroggie also described how education and transparency have become critical. “We’re constantly advising clients on what’s feasible and what’s not. Many are coming in with unrealistic expectations about speed, power, or pricing. It’s part of our job to bridge that gap.”

The consensus was clear: AI-driven demand has transformed data centers from real estate assets into strategic infrastructure platforms, with financial, political, and environmental implications far beyond the industry itself.

Looking Ahead: The Next Decade of AI-Driven Infrastructure

As the discussion drew to a close, the panelists reflected on the extraordinary pace of change. “AI is not replacing; it’s additive,” said Scroggie. “Every new workload, every new inference model adds demand. The scale we’re dealing with is unprecedented.”

In this new era, speed, sustainability, and sovereignty are the defining dimensions of competitiveness. The industry’s success will hinge on its ability to innovate faster than the challenges it faces, whether those are regulatory, environmental, or geopolitical.

“We’re building the highways of the digital era,” said Nguyen in closing. “And like every industrial revolution before it, those who solve the energy equation will lead the world.”

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, and gain access to C-level executives, investors, and industry-leading research? Then save the date for infra/STRUCTURE 2026, set for October 7-8, 2026 at The Wynn Las Vegas. Pre-registration for the 2026 event is now open, and you can visit www.infrastructuresummit.io to learn more.

The post How Artificial Intelligence Is Redefining the Future of Global Infrastructure appeared first on Data Center POST.

DC Investors Are Choosing a New Metric for the AI Era

2 December 2025 at 16:00

The conversation around data center performance is changing. Investors, analysts, and several global operators have begun asking a question that PUE cannot answer: how much compute do we produce for every unit of power we consume? This shift is not theoretical; it is already influencing how facilities are evaluated and compared.

Investors are beginning to favor data center operators who can demonstrate not only energy efficiency but also compute productivity per megawatt. Capital is moving toward facilities that understand and quantify this relationship. Several Asian data center groups have already started benchmarking facilities in this way, particularly in high-density and liquid-cooled environments.

Industry organizations are paying attention to these developments. The Open Compute Project has expressed interest in reviewing a white paper on Power Compute Effectiveness (PCE) and Return on Invested Power (ROIP) to understand how these measures could inform future guidance and standards. These signals point in a consistent direction: PUE remains valuable, but it can no longer serve as the primary lens for evaluating performance in modern facilities.

PUE is simple and recognizable.

PUE = Total Facility Power ÷ IT Power

It shows how much supporting infrastructure is required to deliver power to the IT load. What it does not show is how effectively that power becomes meaningful compute.

As AI workloads accelerate, data centers need visibility into output as well as efficiency. This is the role of PCE.

PCE = Compute Output ÷ Total Power

PCE reframes performance around the work produced. It answers a question that is increasingly relevant: how much intelligence or computational value do we create for every unit of power consumed?

Alongside PCE is ROIP, the operational companion metric that reflects real time performance. ROIP provides a view of how effectively power is being converted into useful compute at any moment. While PCE shows long term capability, ROIP reflects the health of the system under live conditions and exposes the impact of cooling performance, density changes, and power constraints.
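To make the relationship between the three metrics concrete, here is a minimal sketch. All names and figures are hypothetical; the unit chosen for compute output (petaFLOPS here) is one of several possibilities, and because the white paper's exact ROIP formulation is not reproduced in this article, ROIP is approximated below as live PCE relative to a design-point PCE.

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw


def pce(compute_output: float, total_facility_kw: float) -> float:
    """Power Compute Effectiveness: compute output per unit of total power.

    The unit of `compute_output` is the operator's choice (delivered
    petaFLOPS, tokens per second, completed jobs); consistency across
    facilities matters more than the unit itself.
    """
    return compute_output / total_facility_kw


def roip(instant_pce: float, design_pce: float) -> float:
    """Return on Invested Power, sketched as live PCE against the
    facility's design-point PCE (an assumed formulation)."""
    return instant_pce / design_pce


# Hypothetical snapshot of a liquid-cooled AI hall (all figures illustrative)
total_kw = 12_000.0   # total facility draw
it_kw = 10_000.0      # IT load
petaflops = 8_400.0   # measured useful compute output

print(f"PUE:  {pue(total_kw, it_kw):.2f}")                    # 1.20
print(f"PCE:  {pce(petaflops, total_kw):.3f} PFLOPS per kW")  # 0.700
print(f"ROIP: {roip(pce(petaflops, total_kw), 0.75):.2f}")    # 0.93
```

The point of the sketch is the denominator: PUE divides by IT power and says nothing about output, while PCE divides output by total power, so a cooling degradation that throttles compute shows up in PCE and ROIP even when PUE barely moves.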

This shift in measurement mirrors what has taken place in other sectors. Manufacturing moved from uptime to throughput. Transportation moved from mileage to performance and reliability. Data centers, especially those supporting AI and accelerated computing, are now moving from efficiency to productivity.

Cooling has become a direct enabler of compute and not just a supporting subsystem. When cooling performance changes, compute output changes with it. This interdependence means that understanding the relationship between power, cooling capability, and computational output is essential for real world performance, not just engineering design.

PUE still matters. It reflects operational discipline, mechanical efficiency, and the overall metabolism of the facility. What it cannot reveal is how much useful work the data center is actually producing or how effectively it can scale under load. PCE and ROIP fill that gap. They provide a more accurate view of capability, consistency, and return on power, especially as the industry moves from traditional air-cooled environments to liquid-ready, high-density architectures.

The next phase of data center optimization will not be defined by how little power we waste, but by how much value we create with the power we have. As demand increases and the grid becomes more constrained, organizations that understand their true compute per megawatt performance will have a strategic and economic advantage. The move from energy scarcity to energy stewardship begins with measuring what matters.

The industry has spent years improving efficiency. The AI era requires us to improve output.

# # #

About the Author

Paul Quigley is the former President and current Chief Strategic Partnership Officer of Airsys Cooling Technologies, and a global advocate for high-density, energy-efficient data center design. With more than three decades in HVAC and mission-critical cooling, he focuses on practical solutions that connect energy stewardship with real-world compute performance. Paul writes and speaks internationally about PCE, ROIP, and the future of data center health in the age of AI.

The post DC Investors Are Choosing a New Metric for the AI Era appeared first on Data Center POST.

Managed Infrastructure at a Crossroads: The Power-First Revolution

25 November 2025 at 19:30

How Data Center Developers Are Navigating the Clash Between Rapid AI Demand and Decades-Old Regulatory Frameworks

The data center industry has undergone a seismic shift that would have seemed unthinkable just a few years ago. Historically, operators selected prime real estate locations and then brought power to their facilities. Today, that equation has completely reversed: developers are taking their data centers to wherever power exists, fundamentally reshaping the geographic and strategic landscape of digital infrastructure.

At the infra/STRUCTURE Summit 2025, held October 15-16 at The Wynn Las Vegas, a distinguished panel gathered to explore this transformation and its profound implications. The session “Managed Infrastructure at a Crossroads” brought together experts from across the infrastructure ecosystem to discuss the challenges, opportunities, and regulatory complexities of the power-first era.

Moderated by Hadassa Lutz, Senior Consulting Analyst at Structure Research, the panel featured Gene Alessandrini, SVP of Energy & Location Strategy at CyrusOne; Allison Clements, Partner at ASG (Aspen Strategy Group) and former FERC Commissioner; and David Dorman, Director of Commercial Operations at APR Energy. Together, they examined how the industry is navigating unprecedented demand while working within, and sometimes around, regulatory frameworks that were never designed for this moment.

The 12-Month Transformation: From “Tier One Markets” to “Just Finding Power”

Alessandrini opened the discussion with a striking timeline that captured the industry’s rapid evolution. When he joined CyrusOne just 12 months prior, the focus was squarely on tier-one markets: Northern Virginia, Columbus, and Chicago. Traditional data center hubs with established infrastructure and connectivity.

“Six months into the job, a lot of new secondary markets started popping up,” Alessandrini explained. “Twelve months into the job, it’s like ‘just find power, and then we’ll figure it out.’”

This shift represents more than a change in strategy; it’s a fundamental reimagining of the industry’s priorities. “We really understand that the industry is constrained in the traditional markets,” Alessandrini continued. “Even the new secondary markets are getting tapped out way too quickly. We’ve gone from an industry that focused really on land acquisition with utility interconnection to now, 12 months later, where any source of power is on the table and location requirements are wide open.”

The traditional site selection checklist put land first and power second, followed by water, workforce, and tax incentives. That checklist has now been completely inverted: power is number one, and land is secondary. “Geography is now wide open,” Alessandrini said. “Your decisions on where you’re going to build that next data center really diverge from all the patterns you followed before.”

The trade-off, he noted, is that scale becomes critical. “If you can get scale at that location, you somewhat offset the ecosystem of challenges when you start building data centers in farther-away locations.”

Bridging Solutions: APR Energy’s Rapid Deployment Model

Dorman of APR Energy provided insight into how power generation providers are responding to this urgent demand with innovative, fast-deployment solutions.

“We focus right now on bridging solutions, either a component solution that we can potentially build for a utility or standalone generation,” Dorman explained. “Our lead time from contract to start-up is about 30 to 90 days after receiving permits and everything else. We’re very quick, and what we bring is kind of a fast-to-market approach.”

APR Energy, with over 20 years in the industry, has developed a proven playbook for rapid deployment. Their equipment is scalable, delivered in blocks of 50, 150, or 300 megawatts depending on site requirements. “The larger the deployment, the more equipment you have, potentially the cheaper the price point,” Dorman noted.

However, even with this rapid deployment capability, there are critical prerequisites. “The assumptions are: you have permits, you have a site that’s a viable site, and you have fuel supply, typically natural gas, connected close to the site,” Dorman said. “If not, there are providers out there that can bridge to a potential connection if there is some development needed.”

The role APR Energy plays is essential in the current environment: providing a bridging solution that fills the gap between immediate demand and the 24-60 months it typically takes for utility-scale generation or permanent solutions to come online.

The Regulatory Reality: A Clash of Cultures and Timelines

Allison Clements, bringing her perspective as a former Federal Energy Regulatory Commission (FERC) commissioner, offered a sobering assessment of the regulatory challenges facing the industry.

“What is fascinating to me is the clash of cultures between the regulated utility industry and the data center development industries,” Clements said. “Data center developers have a real estate background, they’re tech players coming in, they’re used to operating in actual markets with real supply and consumer choice. In the energy world, it just doesn’t work that way.”

Clements described the fundamental mismatch: “You’ve got a regulatory machine and incumbent incentives, and nothing takes less than 30 or 60 or 90 days. There’s this real lack of appreciation and understanding: Why do you have to move so quickly? Why are you moving so slowly?”

Clements emphasized that when data center developers enter the utility space, they’re stepping into one of the most heavily regulated industries in existence. “You might have one, two, or three regulators who have a hand in decisions when it comes to your interconnection, permitting, and water use. The market is trying to move so quickly, and these regulatory frameworks are just trying to catch up.”

Clements’ message was clear but empathetic: “These utility sectors aren’t dumb. They’re just built into a giant bureaucracy that wasn’t designed to enable the goals that we now have today. That’s true for state regulatory commissions and the Federal Energy Regulatory Commission. We just weren’t set up for this.”

Alessandrini echoed this disconnect from the developer perspective: “Twelve months before I started, I never thought utilities were as separate from our world as you just described. But after living it for 12 months, I realized that our industry is moving much faster than the regulatory framework: utility markets, interconnection, everything is at a slower pace.”

The challenge, Alessandrini explained, is finding ways to bridge this reality gap. “We’re having lots of conversations, trying to bridge the understanding of what our facilities are, what their businesses are, and why we’re unfortunately not moving at the same pace. We’re working together to find ways that relieve pressure on their framework so they can be more comfortable making decisions that allow us to move faster.”

The Utility Incentive Problem: Capital Investment vs. Operational Efficiency

A critical issue the panel addressed was the fundamental structure of utility incentives, a system that may be working against the rapid expansion the industry needs.

“What’s often missed is the perception that utilities are incentivized incorrectly,” Lutz noted, asking the panelists to expand on whether utilities are rewarded more for capital spending than for optimization and efficiency.

Clements confirmed this concern is rooted in reality. “You have a regulatory system where the incumbent utilities have been given the franchise right to be a monopoly, and they make money by one: volumetric sales, so the more electrons they sell, the more money they make, and two: by capital investment, steel in the ground, generation or grid.”

This creates a structural problem: “A lot of times, efficient operations and opportunities like buying behind-the-meter generation don’t align with the utility business model.”

Dorman, drawing on his 13 years as a utility executive before moving to generation, offered a particularly insightful observation: “It’s funny we’re sitting here saying utilities want to invest in their rate base because that becomes their revenue stream. Yet now the behavior, and I know it’s not their intention, but the way it appears, is that you don’t want the load anymore, so your rate base doesn’t grow. The new tariff structures appear to disincentivize load growth rather than incentivize it.”

This paradox sits at the heart of the industry’s current challenges: utilities structured to profit from capital investment and volume sales are implementing tariffs that may discourage the very load growth that should benefit them.

Cost Allocation: The Political Third Rail

Clements addressed one of the most politically sensitive issues facing the industry: who pays for what when data centers connect to the grid.

“Data center markets have come onto the system at rapid-fire pace in a moment where electricity prices were already rising,” Clements explained. “The grid has been underinvested in for a long time.”

She outlined three types of costs:

  1. Direct interconnection costs: the physical connection to the grid
  2. Indirect grid impact costs: taking up space on the grid that might impact economic constraints elsewhere
  3. The cost of the electron itself: the actual generation cost

“There’s a lot of discussion around cost allocation,” Clements said. “Data centers come in saying ‘we want to pay our fair share,’ and they do pay for direct costs like substations or switching circuits. But what they don’t pay for is residential customer increases in electricity prices or broader transmission development.”

Clements was careful to note this isn’t necessarily unfair; it’s how supply and demand markets are supposed to work. “You have new customers, new supply should come in, and it should all work out. The opportunity for data centers to lower costs is tremendous.”

The problem, she explained, is timing. “The underlying regulatory frameworks haven’t kept up. As a result, you see demand increasing and supply tightening because we haven’t had time for new supply to come in. These rising costs have been in some part because of data centers, but in large part were coming regardless.”

The political pressure is mounting. “Data center opposition is growing in communities around the country,” Clements warned. “These are real people with cost concerns, and we need to take it seriously.”

The Scale of the Challenge: 128 Gigawatts by 2029

To put the industry’s challenge in perspective, Clements shared a sobering statistic: “The lowest forecast suggests we’ll have 128 new gigawatts of power demand for data centers. That’s the amount of power it takes to power the entire mid-Atlantic region, which includes Philadelphia, Washington D.C., and Chicago.”

Clements was blunt about the timeline constraints: “If you want power by Q4 2029 and you start today, you can’t build a combined-cycle gas plant in that time. Maybe, if you’re lucky enough to secure modular equipment and you start procurement today, in 18 months you can have a solution.”

This reality is driving the search for alternatives and interim solutions, everything from bridging generation to demand flexibility to grid optimization technologies.

Being a “Good Citizen”: Beyond Just Big Power Solutions

When asked what it means to be a “good citizen” in this environment, the panelists emphasized the need for data center operators to look beyond simple power procurement.

Clements urged the industry to embrace innovation: “The hyperscalers want these innovative solutions. Think about opportunities beyond just the big power solutions. There’s hardware and software that helps run the grid more dynamically. We still run our grid like it’s the 1980s, no joke, in the US.”

Clements also highlighted demand flexibility as a critical tool: “Data centers actually committing to some sort of proactive curtailment throughout the year. That’s hard, it might involve going offline for periods. There are trade-offs in each of these approaches, and each one introduces different risks into your transaction that may or may not be desirable.”

The panel also addressed community impact. “There’s a lot of opportunity for smart developers to give back to the community,” Clements said. “Fire stations, education, public services: these investments matter. We need to help communities understand which part of their electricity bills data centers are responsible for and what they’re doing to mitigate that impact.”

The New Geographic Reality: Data Centers in Unexpected Places

Alessandrini painted a picture of the industry’s evolving geography. “We’re possibly creating new markets for data centers because we’re taking data centers to places you’ve never seen before, the outskirts of Texas, Alabama, Wyoming, and all these other areas.”

This geographic expansion isn’t without challenges. These regions often lack the established ecosystems, workforce, connectivity, and supply chains that traditional markets offer. But when balanced against the availability of power at scale, the trade-offs become acceptable.

“The reality is the industry will continue to broaden,” Alessandrini said. “Power solutions will come from locations with access to gas supply that historically weren’t considered data center markets.”

The 24-to-60-Month Gap: A Bridge Too Far?

Perhaps the session’s most critical tension was captured in Alessandrini’s assessment of timeline misalignment.

“What I realized 12 months in is that instead of being more comfortable that the gap was closing, the gap is actually widening,” Alessandrini said. “We have a 24-month timeline to get facilities built and operational. The regulatory and utility side operates on a 36-to-60-month timeline.”

He was emphatic about the industry’s position: “We’re going to be there in 24 months, and you just tell us when you can join the party, whether that’s 60 months or 72 months. But guess what? We’re going to be there in 24 months.”

The question facing the industry is how to bridge this gap. “We’re going to build power plants, we’re going to build bridging solutions, we’re going to build all these things to allow data centers to get built based on the velocity of our industry,” Alessandrini said.

Dorman agreed: “If we can solve the problems as an industry and bring that 60 months down to 36 months, we still have this 24-month target that we just can’t let go of. We’re going to keep building.”

Looking Ahead: Nuclear, SMRs, and Long-Term Solutions

While the session focused heavily on near-term challenges and natural gas solutions, the panelists acknowledged that longer-term answers may include nuclear power and Small Modular Reactors (SMRs).

“The new SMRs are something which can come to market too,” Alessandrini noted. “We’re trying to wrap our heads around the new reality of data centers today and possibly creating new markets, including with emerging nuclear technologies.”

However, the timeline for commercialized SMR technology remains uncertain, making bridging solutions and interim approaches all the more critical.

Key Takeaways: Navigating the Power-First Era

The panel’s discussion revealed several critical insights for the data center industry:

  1. Power Has Become the Primary Site Selection Criterion: The traditional real estate-first approach is dead. Geography is now determined by power availability at scale, fundamentally reshaping the data center map.
  2. The Regulatory-Developer Timeline Gap Is Widening: Developers operate on 24-month cycles; regulators and utilities on 36-to-60-month cycles. This gap isn’t closing; it’s growing, forcing creative bridging solutions.
  3. Utility Incentive Structures Are Misaligned: Current regulatory frameworks reward utilities for capital investment and volumetric sales, which may not align with the rapid, efficient expansion the industry needs.
  4. Cost Allocation Is a Political Powder Keg: As residential electricity bills rise and data center development accelerates, community opposition is growing. The industry must proactively address cost concerns and community impact.
  5. Bridging Solutions Are Essential: With demand far outpacing utility-scale generation timelines, fast-deployment bridging solutions from providers like APR Energy are critical to keeping projects on track.
  6. Being a Good Citizen Requires More Than Paying Bills: Data center operators must embrace demand flexibility, support community initiatives, and invest in grid optimization technologies, not just consume power.
  7. The Scale Is Unprecedented: Meeting 128 gigawatts of new demand by 2029 will require every tool in the toolbox, traditional generation, bridging solutions, demand management, grid optimization, and potentially nuclear.
  8. Secondary and Tertiary Markets Are the New Frontier: Texas, Alabama, Wyoming, and other historically non-traditional data center locations are becoming viable, even preferred, due to power availability.

For operators, investors, policymakers, and community stakeholders, the message is clear: the data center industry is at an inflection point. The power-first revolution isn’t a temporary adjustment; it’s the new normal. Success will require unprecedented collaboration between developers, utilities, regulators, and communities to bridge the gap between digital infrastructure’s breakneck pace and the energy sector’s deliberate timelines.

As Alessandrini concluded: “This is a dynamic industry to be in. We’re going to keep building, we’re going to keep finding solutions, because the demand isn’t going away, it’s only accelerating.”

The post Managed Infrastructure at a Crossroads: The Power-First Revolution appeared first on Data Center POST.

Submarine Networks World 2025: Advancing Global Connectivity Beneath the Waves

19 November 2025 at 19:30

Submarine Networks World 2025, held September 24–25 in Singapore, once again cemented its position as the premier global gathering for the subsea communications community. Bringing together leaders across undersea infrastructure, cable technology, and digital connectivity, this year’s event delivered fresh insights on innovation, collaboration, and the future of resilient global networks.

Event Overview

Hosted at the Sands Expo & Convention Centre, Submarine Networks World 2025 welcomed more than 1,000 attendees from across the industry, including cable operators, technology vendors, regulators, investors, and infrastructure developers. The program featured over 130 speakers and more than 70 sponsors and partners, including recognized industry leaders such as Nokia, Ciena, and Digital Realty. Keynotes, debates, technical theatre presentations, and high-value networking sessions created a dynamic environment for exchanging ideas and forecasting trends shaping the subsea ecosystem.

Key Themes and Highlights

Cable Resilience and Security

A central theme throughout the event was the industry’s increasing focus on resilience. Panels explored strategies for diversifying routes, improving fault detection, strengthening data openness, and protecting subsea assets from risks ranging from climate events to geopolitical tensions.

Technological Innovation

Speakers highlighted major advancements transforming the subsea landscape, including pluggable technologies for submarine networks, fiber sensing for predictive maintenance, and the evolution toward petabit-scale cable systems. These innovations mark an important shift as operators aim to deliver higher capacity with greater efficiency.

Scaling to Meet Demand

With global bandwidth needs accelerating due to cloud growth, AI workloads, and digital expansion, the conference underscored the pressing need for large-scale infrastructure development. Experts noted that traffic requirements could double by 2030, emphasizing the urgency for new systems, expanded routes, and increased investment.

Sustainability and Transparency

Sustainability also took center stage, with leaders calling for enhanced mapping practices, standardized open-data models, and more environmentally responsible construction. The conversation pointed toward building not only faster and stronger networks, but smarter and cleaner ones as well.

Regional Collaboration

Sessions highlighted the rising influence of emerging markets, particularly in the Asia-Pacific region. Indonesia stood out for showcasing its connectivity initiatives, unique subsea challenges, and growing leadership role in regional digital infrastructure.

Community Impact and Takeaways

Attendees praised the depth and relevance of the discussions, as well as the diversity of perspectives from C-suite executives to highly specialized engineers. The event reinforced a collective commitment to innovation, security, and global cooperation as the subsea community navigates rising demand and an increasingly complex operating environment.

Looking Ahead

Submarine Networks World 2025 reaffirmed its status as the definitive annual forum for subsea connectivity. By bringing together the industry’s brightest minds and boldest strategies, the event set the tone for continued progress heading into 2026 and beyond. With momentum building across technology, sustainability, and international partnership, the global subsea communications community is well positioned to meet the challenges and opportunities of the next decade.

To learn about the upcoming Submarine Networks World 2026 and to register for the event, visit www.terrapinn.com/conference/submarine-networks-world/index.stm.

The post Submarine Networks World 2025: Advancing Global Connectivity Beneath the Waves appeared first on Data Center POST.

How the West Lost the Automotive Industry

26 January 2026 at 02:22

By David Waterworth and Paul Wildman Yes, past tense. The West has already lost the dominance of the global auto industry. Why? And will the USA become the new Cuba? Recently, my writing colleague, Dr Paul Wildman, contacted me and suggested we explore these topics. What is the West’s capability ... [continued]

The post How the West Lost the Automotive Industry appeared first on CleanTechnica.

Advanced Partial Discharge Filtering Technology for a Smarter HV Lab

28 December 2025 at 05:48

In high-voltage laboratories, achieving accurate, repeatable, and IEC-compliant Partial Discharge (PD) measurements is becoming increasingly challenging. Modern test environments are exposed to significant RFI/EMI, switching spikes, harmonics, and ground noise, all of which overlap the IEC 60270 PD detection band (40–300 kHz). These disturbances can mask true PD activity, lead to false readings, and compromise […]

Panduit names Holly Garcia as Chief Commercial Officer

21 January 2026 at 11:00

Panduit has promoted Holly Garcia to Chief Commercial Officer, tasking her with leading the company’s global commercial strategy and customer-facing approach.

Garcia will report directly to Panduit President Marc Naese, with the appointment coming as the firm positions itself for growth across its electrical and network infrastructure markets.

“Holly has the vision and expertise to position our company for continued growth and success while deepening our customer relationships,” said Naese. 

“Holly’s leadership as Chief Commercial Officer will be instrumental in strengthening the customer experience and delivering the value our markets expect.”

Garcia most recently served as Vice President of Panduit’s Data Centre business unit, where she led growth and innovation initiatives and oversaw business strategy and new product introductions aimed at strengthening the company’s position in the data centre market.

Panduit said Garcia brings more than 25 years of experience across sales, marketing and business unit leadership.

“I’m honoured to take on the role of Chief Commercial Officer and excited to lead our commercial strategy during this time of growth,” explained Garcia. 

“Our team’s commitment to innovation and customer success has positioned us as a trusted partner globally, and I look forward to driving even greater value for our customers and stakeholders.”
