LFB Group rebrands data centre division as Apx

LFB Group’s dedicated data centre division has rebranded to Apx, in a move the company says reflects the “complexity, pace and performance expectations” now defining the European data centre market.

The rebrand comes as operators and developers grapple with rising compute intensity, with AI deployments pushing rack densities higher and putting greater scrutiny on cooling performance and delivery timelines. In that environment, Apx says closer collaboration earlier in the design and build cycle – including co-engineering and pre-commissioning – is becoming increasingly important.

The name may already be familiar: LFB Group previously used Apx for an entire cooling infrastructure product series. Now, however, the name is being extended to the whole division.

Apx retains the dedicated team from LFB Group, which was previously part of Lennox, so the experience the company has built up over the last 20 years carries over – just under a new name.

Why has LFB Group rebranded its data centre division to Apx? 

Given its established position in the market, why rebrand? The company says Apx is all about market positioning: it has recently debuted three new products and is keen to capitalise on the explosive growth occurring in the data centre market – especially in Europe.

The company is staking its position on pre-commissioning and early validation work, with capabilities it describes as spanning precision manufacturing, automated testing and climatic validation.

Matt Evans, CEO at Apx Data Centre Solutions, argued that the ability to validate performance earlier has become a differentiator as large projects are announced at pace. He noted, “The industry’s dams have well and truly burst, with billion-dollar projects and developments being announced almost every week. Keeping on top of this demand, though, has never been more important.

“Today, collaboration is everything. Operators are searching for partners who can offer them both flexibility and agility, enabling them to build for the future while reacting quickly to what’s happening right now. That’s where co-engineering becomes critical; by working with designers, contractors and operators from day one, we can shape decisions together, anticipate challenges and engineer solutions before they become problems.”

Evans added that front-loading engineering work is intended to reduce uncertainty once equipment reaches site. He continued, “While no one can predict what’s around the corner, one thing is clear: performance has to be proven earlier. It’s been one of our grounding principles since the start; the idea that pre-commissioning must be core to every product’s DNA. By front-loading engineering, validating performance up-front and removing uncertainty before components reach sites, we give operators the head space, and time, to meet the demand.

“The direction of travel is clear: scale, capacity and density. And I couldn’t be more excited about where we’ve taken this business. The new Apx name marks our next chapter, and it’s one we’re genuinely proud to be part of.”

While it has a new name, Apx will continue to sit within the wider LFB Group, which also includes HVAC specialist Redge and refrigeration business Friga-Bohn. The group says this structure provides industrial-scale manufacturing support and engineering expertise across refrigeration and mechanical disciplines.

Alongside the branding change, Apx is also expanding headcount. The company said it will recruit across project management, operations, controls, commissioning and sales support roles in France, Germany and the Netherlands. By 2027, its dedicated data centre team is expected to reach around 50 employees.

  •  

DCR Predicts: Is data sovereignty about to trigger a cloud rethink?

With regulators and boards paying closer attention to where sensitive data sits, Fred Lherault, Field CTO EMEA/Emerging Markets at Pure Storage, outlines why hybrid strategies and selective cloud repatriation are likely to accelerate as AI scales.

After two years of accelerated AI experimentation, rising expectations, and rapid vendor expansion, I believe 2026 will mark an important inflection point for organisations building modern data infrastructure. Many enterprises are now moving past the initial hype cycle and focusing on what is required to operationalise AI reliably and at scale.

That shift is already visible across customers evaluating how AI will integrate into production workflows. If we extrapolate from these trends, several themes are likely to influence how organisations design their data pipelines, storage architectures, and cloud strategies in the year ahead. The following reflects my perspective on how these dynamics may unfold.

From hype to production: data readiness and inference become the priority

While some organisations are still convincing themselves how essential AI is, most are now realistic about what they do – and, crucially, do not – deploy. The shift in focus from training to inference means that, without a robust inference platform and the ability to get data ready for AI pipelines, organisations are set to fail.

As AI inference workloads become part of the production workflow, organisations will have to ensure their infrastructure supports not just fast access, but also high availability, security, and non-disruptive operations. Not doing this will be costly, both from a results perspective, and an operational one.

However, most organisations are still struggling with the data readiness challenge. Getting data AI-ready requires going through many phases, such as data ingestion, curation, transformation, vectorisation, indexing, and serving. Each of these phases can typically take days or weeks, and delay the point when the AI project’s results can be evaluated by the business.

Organisations that care about using AI with their own data will focus on streamlining and automating the whole data pipeline for AI – not just for faster initial results evaluation, but also for continuous ingestion of newly created data, and iteration.

This remains one of the most significant barriers to AI adoption. Enterprise data is often dispersed across legacy systems, cloud environments, and archives, which makes it difficult to access and prepare at the speed AI workflows require. In 2026, we can expect this challenge to become more pronounced as organisations look to extract value from all of their data, regardless of location. Manual preparation will not scale to meet these requirements. Automated pipelines, richer metadata, and integrated data platforms will become essential foundations for organisations aiming to use AI with continuous, repeatable outcomes.
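
To make those phases concrete, the sketch below strings them together in Python. It is a minimal illustration only – the records, the chunk size and the embed() stand-in are our own assumptions, not a reference to any particular product – but it shows how a pipeline that runs end to end can be re-run automatically as new data arrives.

```python
# Minimal sketch of an automated AI data-readiness pipeline.
# The phases mirror those above: ingest -> curate -> transform ->
# vectorise -> index -> serve. embed() is a stand-in for a real
# embedding model; the in-memory index stands in for a vector store.

from dataclasses import dataclass, field

@dataclass
class VectorIndex:
    entries: list = field(default_factory=list)  # (vector, text) pairs

    def add(self, vector, text):
        self.entries.append((vector, text))

def ingest(sources):
    # Ingestion: pull raw records from dispersed systems.
    for source in sources:
        yield from source

def curate(records):
    # Curation: drop empty and duplicate records.
    seen = set()
    for r in records:
        r = r.strip()
        if r and r not in seen:
            seen.add(r)
            yield r

def transform(records, chunk_size=200):
    # Transformation: split long records into model-sized chunks.
    for r in records:
        for i in range(0, len(r), chunk_size):
            yield r[i:i + chunk_size]

def embed(text):
    # Vectorisation placeholder: a real pipeline would call an
    # embedding model here.
    return [float(ord(c)) for c in text[:8]]

def build_index(chunks):
    # Indexing: store vectors so they can be served to inference.
    index = VectorIndex()
    for chunk in chunks:
        index.add(embed(chunk), chunk)
    return index

# Re-running the pipeline on new data keeps the served index current
# without manual preparation (the 'continuous ingestion' above).
sources = [
    ["Contract A: renewal due March 2026.",
     "Contract A: renewal due March 2026."],   # duplicate record
    ["Support ticket 4411: cooling fault resolved.", ""],
]
index = build_index(transform(curate(ingest(sources))))
print(f"{len(index.entries)} chunks ready to serve")
```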

AI and data sovereignty will reshape cloud strategy, and accelerate selective repatriation

The dual issues of AI and data sovereignty are driving concerns about where data is stored, and how organisations can maintain trust and guarantee access in the event of any issues. To extract value from AI, it is critical for organisations to know where their most important data is, and that it is ready for use.

Concerns about data sovereignty are also driving more organisations to reconsider their cloud strategy, and rising geopolitical tensions and regulatory pressure will shape national data centre strategies in 2026. Governments, in particular, want to minimise the risk that access to data could be used as a threat or negotiating tactic. Organisations should be similarly wary, and prepare themselves.

We are already seeing early indicators of this shift. Boards and regulators are paying closer attention to where sensitive and strategically important data resides, driven, in part, by evolving regulatory frameworks such as GDPR, DORA, and guidance emerging from the EU AI Act. This scrutiny is prompting many organisations to reassess cloud strategies that once prioritised cost or convenience over sovereignty and resilience.

As a result, hybrid models are likely to expand, with more AI-critical datasets and workloads positioned closer to where they can be governed, audited, and controlled. This is not a retreat from the cloud, but a more deliberate, workload-specific leveraging of it.

KubeVirt will scale into mainstream production

The recent changes to VMware licensing that followed Broadcom’s acquisition have kickstarted a conversation around alternative approaches to virtualised workloads. KubeVirt, which allows management of virtual machines through Kubernetes, provides one such alternative—a platform that encompasses both virtualisation and containerisation needs—and I expect it will take off in 2026.

The KubeVirt offering has matured to the point where it is suitable for enterprise needs. For many, moving to another virtualisation provider is a huge upheaval, and, while it may eventually save money, it always comes with a set of limitations and constraints, especially when it comes to everything that surrounds the virtualisation platform (data protection, security, networking, and so on).

KubeVirt enables organisations to leverage the growing Kubernetes ecosystem to more quickly realise value in a platform that can manage, orchestrate, and monitor not just VMs but also containers, regardless of how the mix of the two evolves over time.
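
For readers unfamiliar with the model, the sketch below shows what ‘a VM as a Kubernetes object’ looks like in practice, using the official Kubernetes Python client. It assumes a cluster with KubeVirt installed and a local kubeconfig; the VM name, image and sizing are illustrative.

```python
# Sketch: declaring a KubeVirt VirtualMachine through the Kubernetes
# API, so VMs are managed with the same tooling as containers.
# Assumes a cluster with KubeVirt installed; names are illustrative.

from kubernetes import client, config

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "default"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk",
                                   "disk": {"bus": "virtio"}}],
                    },
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    # containerDisk boots the VM from a container image.
                    "containerDisk": {
                        "image": "quay.io/containerdisks/fedora:latest"},
                }],
            }
        },
    },
}

config.load_kube_config()
api = client.CustomObjectsApi()
# VirtualMachine is a custom resource, so it is created through the
# generic custom-objects API rather than a typed client.
api.create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm_manifest,
)
```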

KubeVirt’s momentum reflects a broader shift in how organisations want to operate their infrastructure. As containerisation becomes standard and AI workloads scale, many teams are looking for a unified operational model that reduces complexity, and avoids long-term platform lock-in. Consolidating virtual machines and containers under a single control plane aligns with this direction.

If adoption increases as predicted, storage and data services will evolve in parallel, with greater demand for persistent, low-latency, Kubernetes-native storage that can support mixed-workload environments.

2026 will be about discipline, not disruption

If the past two years have been defined by rapid disruption, driven largely by AI, 2026 is likely to be a year where organisations prioritise the operational foundation required for long-term success. Enterprises will:

  • Move from AI experimentation to consistent, production-grade inference models
  • Modernise data pipelines to support continuous data readiness
  • Reassess cloud strategies with a sharper focus on sovereignty, governance, and resilience
  • Evaluate VMware alternatives, such as KubeVirt, which support a unified approach to virtual machines and containers

The organisations able to take these shifts in their stride will be best placed for success in 2026.

This article is part of our DCR Predicts 2026 series. The series will officially end on Monday, February 2 with a special bonus prediction.

DCR Predicts 2026
  •  

Saudi Arabia pivots NEOM ‘gigaproject’ to AI data centre hub

Saudi Arabia is reportedly preparing to scale back NEOM, its marquee ‘gigaproject’ on the Red Sea, looking instead to develop an AI data centre hub.

According to unnamed sources cited by a report in the Financial Times, Saudi Arabia will scale back NEOM, its hugely ambitious megaproject to create a new liveable region in the desert in the northwest of the country, on the Red Sea coast. The project was announced in 2017 by Crown Prince Mohammed bin Salman and was a cornerstone of his Vision 2030. It was to cover about 26,500 square km, roughly the size of Belgium.

Pictured: Sindalah, a luxury island destination and the first physical showcase of NEOM, on 27 October 2024.

NEOM was due for completion in 2030 and included plans for a city called The Line – a row of 500m-tall skyscrapers stretching for some 170km. However, NEOM suffered many delays and cost overruns, as well as criticism for potential environmental damage and for being unrealistic, among other things.

In addition, Saudi Arabia is hosting the Expo international trade fair in 2030 and the football World Cup in 2034, both of which involve large-scale investment. Work on NEOM was paused in 2025 while the government looked at its options in a year-long review scheduled to conclude this quarter.

According to the FT report, the focus for the region will shift towards industry, such as becoming a hub for data centres. Its location means sea water can be used for cooling, and the Crown Prince is keen to make his country a leader in AI infrastructure – a hub for data centres to power AI – to attract inward investment and high-profile international partners.

An unnamed source cited by the FT said the location had other advantages too, such as digital infrastructure and its position at the crossroads of three continents (Africa, Asia and Europe), plus almost limitless renewable energy and available land.

It’s not the first time NEOM has been touted as a potential host for data centres, with DataVolt committing $5 billion to develop a new 1.5 GW net zero AI campus at NEOM’s Oxagon. That was expected to come online in 2028, but it’s unknown whether it will be impacted by the planned rethink for the NEOM area.

This article originally appeared on Mobile Europe, with additional commentary from Data Centre Review.

  •  

Lanarkshire becomes Scotland’s first AI Growth Zone, UK’s fifth

Lanarkshire has been named the UK’s latest AI Growth Zone, with the UK Government backing a major expansion around DataVita’s data centre site in the area. 

This is the first AI Growth Zone located in Scotland – a region long positioned as an ideal host, given the abundance of renewable power available there. The Scottish Government has also been keen to promote the area in hopes of developing it into a leading zero-carbon, cost-competitive green data centre hub.

The Lanarkshire AI Growth Zone, which is the fifth AIGZ to be announced, is set to be based around DataVita’s campus, with the Scottish data centre firm delivering the site in partnership with AI cloud provider CoreWeave. That’s slightly different from other sites, which have often been positioned around multiple data centre operators – such as the North East Growth Zone, which is centred around expansions to existing campuses from Cobalt Park Data Centres and QTS’ Cambois site.

Despite being centred around a single expanded campus, the UK Government still has big hopes for the site. In fact, it’s hoped that the site will bring more than 3,000 jobs to the area over the coming years, including 50 apprenticeships. Around 800 roles are expected to be higher-paid AI and digital infrastructure jobs, spanning everything from research and software to permanent staff running and maintaining data centres, with the remainder tied to construction and site development.

Alongside job creation, ministers are pointing to £8.2 billion of private investment, plus a community fund worth up to £543 million over the next 15 years, which the Government says will be raised as data centre capacity comes online.

What’s being built as part of the Lanarkshire AI Growth Zone

The Lanarkshire AI Growth Zone may be centred around DataVita and CoreWeave’s partnership, but that doesn’t mean it’s just a single facility. On the contrary, the site is expected to feature 100MW of AI-ready data centre capacity, over 1GW of renewable energy infrastructure connected via private wire, and ‘Innovation Parks’ intended to attract adjacent industries that want proximity to large-scale compute.

That extra power will be key to the deployment of this latest AI Growth Zone – dedicated energy provision is seen as a key factor in gaining the designation – but it should also go some way towards reducing public opposition. Another data centre, located to the south of Glasgow in Hulford, has seen intense local opposition over its enormous power demands, with residents outraged that the site wouldn’t even need to assess its environmental impact on the local area.

DataVita and CoreWeave will be keen to avoid the same backlash – which is why the companies are leaning heavily on a host of sustainability claims for their Lanarkshire AI Growth Zone. As well as using renewable energy to help power the site, the two firms also plan to make use of waste heat.

The current plan is that excess heat from cooling systems could, in time, be redirected to support the nearby University Hospital Monklands, described as Scotland’s first fully digital and net zero hospital – though that element is presented as something to be explored once the site is fully up and running, rather than a guaranteed near-term deliverable.

That would be a huge win for advocates of heat networks, with a recent report suggesting that waste heat from UK data centres could heat more than 3.5 million homes. It could also help the site win favour with local residents impacted by the plans.

It’s not the only part of the plan that has been developed in a bid to win over residents. In fact, a community fund – worth up to £543 million over 15 years – will also be set up to support local programmes ranging from skills and training packages through to after-school coding clubs and support for local charities and foodbanks. 

DataVita’s parent company, HFD Group, is also expected to contribute £1 million per year to local charities and community groups, on top of the Growth Zone community funding mechanism.

Industry reaction

Commenting on plans for the first AI Growth Zone in Scotland, the UK’s Technology Secretary Liz Kendall noted, “Today’s announcement is about creating good jobs, backing innovation and making sure the benefits AI will bring can be felt across the community – that’s how the UK government is delivering real change for the people of Scotland.

“From thousands of new jobs and billions in investment through to support for local people and their families, AI Growth Zones are bringing generation-defining opportunities to all corners of the country.”

Danny Quinn, Managing Director of DataVita, added, “Scotland has everything AI needs – the talent, the green energy, and now the infrastructure. But this goes beyond the physical build. We’re creating innovation parks, new energy infrastructure, and attracting inward investment from some of the world’s leading technology companies. 

“This is a real opportunity for North Lanarkshire, and we want to make sure local people share in it. The £543 million community fund means the benefits stay here – good jobs, new skills, and investment that actually reaches the people who live and work in this area.”

Matthew Baynes, VP of Secure Power and Data Centres at Schneider Electric UK & Ireland, concluded, “In the twelve months since the introduction of the AI Opportunities Action Plan, the UK has seen much progress towards its AI ambitions.

“The new AI Growth Zone (AIGZ) announced today in Lanarkshire demonstrates just how far the country has come in its plans to build a sovereign AI nation, with Scotland becoming a critical new infrastructure hub and joining those in Wales, Oxfordshire, and the Northeast of England.

“Furthermore, the country has now secured more than £31B in investment from some of the world’s largest, leading tech companies, demonstrating that the UK has the people, resources and ambition to make AI a centrepiece of a new and revitalised Industrial Strategy.

“While this can be considered a success in many respects, there is still much work to do. Access to renewable power remains one of the biggest hurdles facing many parts of the country, and as the UK’s energy technology partner for data centres and AI Infrastructure, we believe there is a clear opportunity to catalyse both the AI and green transitions by turning data centres into the energy centres of the future – fast-tracking new developments with behind-the-meter power generation and microgrids.

“Furthermore, the AIGZ announced today could not be more timely. We believe Scotland, with its cool temperate climate and rich conditions to generate renewable energy, provides a key opportunity to create secure, scalable and sustainable infrastructure capable of galvanising the AI race. Now, the UK’s sustainability and AI ambitions must work together hand-in-glove, demonstrating that today’s technology can be a catalyst for a greener future, powered by AI.”

  •  

DCR Predicts: The new bottleneck for AI data centres isn’t technology – it’s permission

As gigawatt-scale sites move from abstract infrastructure to highly visible ‘AI factories’, Tate Cantrell, Verne CTO, argues that grid capacity, water myths, and local sentiment will decide what actually gets built.

The industry in 2026 will need to get ready for hyper-dense, gigawatt-scale data centres, but preparation will be more complicated than purely infrastructure design. AI’s exploding computational demand is pushing designers to deliver facilities with greater density that consume a growing volume of power and challenge conventional cooling.

The growth of hyperscale campuses risks colliding with a public increasingly aware of power and water consumption. If that happens, a gap may open between what designers can achieve with the latest technology and what communities are willing to accept.

A growing public awareness of data centres

The sector has entered an era of scale that would have seemed implausible a few years ago. Internet giants are investing billions of dollars in facilities that redefine large-scale and are reshaping the market. Gigawatt-class sites are being built to train and deploy AI models for the next generation of online services.

But their impact extends beyond the data centre industry: the communities hosting these ‘AI factories’ are being transformed, too.

This is leading to engineered landscapes: industrial campuses spanning hundreds of acres, integrating data halls with power distribution systems and cooling infrastructure. As these sites become more visible, public awareness of the resources they consume is growing. The data centre has become a local landmark – and it’s under scrutiny.

Power versus perception

Power is one area receiving attention. Data centre growth is coinciding with the perception that hyperscale operators are competing for grid capacity or diverting renewable power that might otherwise support local decarbonisation. There is no shortage of coverage suggesting data centres are pushing up energy prices, too.

These perceptions have already had consequences. In the UK, a proposed 90 MW facility near London was challenged in 2025 by campaigners warning that residents and businesses would be forced to compete for electricity with what one campaign group leader called a “power-guzzling behemoth”. In Belgium, grid operator Elia may limit the power allocated to operators to protect other industrial users.

It would not be surprising to see this reaction continue in 2026, despite the steps taken by all data centre operators to maximise power efficiency and sustainability.

Cool misunderstandings 

Water has become another focal point. AI training and inference rely on concentrated clusters of GPUs with rack densities that exceed 100kW. The amount of heat produced in such a dense space exceeds the capabilities of air-based cooling, driving the move to more efficient liquid systems.
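
A rough calculation shows why. Heat removed by air scales as flow × specific heat × temperature rise, and the numbers below (standard air properties, an assumed 15K inlet-to-exhaust rise) illustrate how quickly the required airflow becomes impractical:

```python
# Back-of-envelope: airflow needed to remove rack heat with air alone.
# Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT)
# Assumptions: c_p of air ~1.005 kJ/(kg.K), density ~1.2 kg/m^3,
# and a 15 K rise from rack inlet to exhaust.

CP_AIR_KJ_PER_KG_K = 1.005
AIR_DENSITY_KG_PER_M3 = 1.2
DELTA_T_K = 15.0

def airflow_for_rack(rack_kw):
    mass_flow = rack_kw / (CP_AIR_KJ_PER_KG_K * DELTA_T_K)  # kg/s
    return mass_flow / AIR_DENSITY_KG_PER_M3                # m^3/s

for rack_kw in (10, 40, 100):
    print(f"{rack_kw:>3} kW rack -> {airflow_for_rack(rack_kw):.1f} m^3/s of air")

# ~5.5 m^3/s of air per 100 kW rack (thousands of CFM) is why dense
# AI clusters are moving to liquid cooling.
```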

Yet ‘liquid cooling’ is often interpreted by the public as ‘water cooling’, feeding a perception that data centres are draining natural water sources to cool servers.

In practice, this is rarely the case. While data centres of the past have relied heavily on evaporative cooling towers to deliver lower Power Usage Effectiveness, today we see a strong and consistent trend towards lower Water Usage Effectiveness through smarter cooling and sustainable design. Developments in technology are making water-free cooling possible, too, with half of England’s data centres using waterless cooling. Many operators use non-water coolants and closed-loop systems that conserve resources.
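
For reference, both metrics are simple ratios as defined by The Green Grid: PUE divides total facility energy by IT energy, while WUE divides annual site water use by IT energy. The sketch below works through the trade-off with purely illustrative numbers, not measurements from any named operator:

```python
# Sketch of the two efficiency metrics named above:
# PUE = total facility energy / IT energy
# WUE = annual site water use (litres) / IT energy (kWh)
# All figures below are illustrative assumptions.

IT_ENERGY_KWH = 1_000 * 8_760  # a 1 MW IT load running all year

def pue(total_facility_kwh):
    return total_facility_kwh / IT_ENERGY_KWH

def wue(annual_water_litres):
    return annual_water_litres / IT_ENERGY_KWH

# Evaporative cooling trades water for electrical efficiency;
# dry/closed-loop designs do the reverse.
print(f"evaporative: PUE {pue(1.15 * IT_ENERGY_KWH):.2f}, "
      f"WUE {wue(15_000 * 1_000):.2f} L/kWh")
print(f"dry cooling: PUE {pue(1.30 * IT_ENERGY_KWH):.2f}, "
      f"WUE {wue(500 * 1_000):.2f} L/kWh")
```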

Data centres as part of the community 

Addressing public concerns will require a change in how operators think about their place in communities. Once built, a data centre becomes part of the local fabric and the company behind it, a neighbour. Developers need to view that relationship as more than transactional. They must demonstrate that growth is supported by resilient grids capable of meeting new demand without destabilising supply or driving up cost.

Water and power are essential resources, so public concern is understandable. It’s therefore important that operators show that density and efficiency can be achieved without disproportionate environmental impact. The continued rollout of AI-ready data centres will depend as much on social alignment as on advances in chip performance.

That alignment will be tested in 2026 and beyond as another wave of high-density deployments arrives. Based on NVIDIA’s product roadmap, we already have a sense of what’s coming: each generation of hardware delivers more power and heat, requiring more advanced infrastructure.

NVIDIA’s Chief Executive Jensen Huang introduced the DSX data centre architecture at GTC 2025 in Washington DC, a framework designed to make it easier for developers with limited experience to deploy large-scale, AI-ready facilities. In effect, it offers a global blueprint for gigawatt-scale ‘AI factories’.

A positive outcome of this will be a stronger push towards supply chain standardisation. Companies such as Vertiv, Schneider Electric and Eaton are aligning around modular power and cooling systems that are easily integrated into these architectures. NVIDIA, AMD and Qualcomm, meanwhile, have every incentive to encourage that standardisation. The faster infrastructure can be deployed, the faster their chips can deliver the required compute capacity.

Standardisation, then, becomes a commercial and operational imperative, but it also reinforces the need for transparency and shared responsibility.

Efficiency and expansion 

Behind all of this lies the computational driver: the transformer model. These AI architectures process and generate language, code or other complex data at scale — the foundation of today’s generative AI. They are, however, enormously power-hungry, and even though it’s reasonable to expect a few DeepSeek-type breakthroughs in 2026 – discoveries that achieve similar performance with far less energy thanks to advances in algorithms, hardware and networking – we shouldn’t expect demand for power to drop.

The technical roadmap during 2026 is clear. We are heading towards greater density, wider uptake of liquid cooling and further standardisation. With data centres running as efficiently and sustainably as possible, developers and operators will need to establish trust with local stakeholders for the resources required to develop and power the AI factories that will drive a new era of industrial innovation.

This article is part of our DCR Predicts 2026 series. Check back every day this week for a new prediction, as we count down the final days of January.

DCR Predicts 2026
  •  

We’re going On the Record with a new column series

Data Centre Review is launching a new monthly column series, dubbed On the Record, which will feature regular commentary from named contributors across the data centre industry.

The new series is designed to provide a spotlight to select voices to share perspectives on the issues shaping the sector, from resilience and energy regulation to skills and emerging technologies.

Data Centre Review has always been a town hall – a place where diverse opinions are allowed to shine, and that will continue. However, unlike one-off guest comment pieces, On the Record is structured as a recurring series, with contributors publishing on Data Centre Review each and every month. That gives our readers a consistent set of industry viewpoints to follow over time.

What to expect from On the Record

Each On the Record column will offer a direct, accountable viewpoint from a recognised organisation or specialist contributor. Topics will span the challenges and opportunities facing data centres today, including:

  • Design, build and operations best practice
  • Emerging trends and technology impacts
  • Energy, sustainability and regulation
  • Infrastructure and resilience
  • Skills, talent and leadership

We’re launching the series with two initial contributors: 

On the Record with the Data Centre Alliance – This column will bring an industry-wide perspective on standards, priorities and the big conversations influencing the sector.

On the Record with Critical Careers – This column will focus on careers and representation in the industry, with an emphasis on women entering data centres and the barriers that still exist.

The first On the Record with the Data Centre Alliance is now live, exploring the topic of water scarcity and whether the UK’s data centre industry can do more when it comes to reducing its water usage. You can read that here.

Additional contributors are expected to be added over time, expanding the range of organisations and topics represented within the series. 

  •  

How to avoid drowning in data at the expense of freshwater supplies

TechBuyer’s Astrid Wynne argues that as AI drives up cooling demand, water stewardship must become a core design principle – not an afterthought.

As artificial intelligence accelerates demand for data centre capacity, the conversation around sustainability is shifting. Energy efficiency has long dominated the agenda, but water, the silent resource underpinning cooling systems, has emerged as a critical concern.

Scoping the problem on site and throughout operations, and providing practical guidance to avoid extra strain on freshwater use, were key aims of The Data Centre Alliance’s Drowning in Data best practice paper, published in October 2025. Developed by leading industry experts, the paper explains how to avoid freshwater use, how to account for the water footprint of energy use, and how to maximise water efficiency in cooling systems.

Growing awareness of water scarcity

Water scarcity is no longer a distant threat. Today, four billion people experience severe water stress for at least one month each year, according to a 2025 World Economic Forum report. In the UK, the deficit between the infrastructure capacity to provide clean water and the demands placed on it by agriculture, housing and industrial needs is in the billions of litres a day. The growing number of data centres, and reports of their on-site water use, began to raise alarm bells in the mainstream press in early 2025.

With Keir Starmer’s announcement of projected ‘AI Growth Zones’ early in the year came articles from the BBC raising concern that the UK’s AI ambitions could lead to water shortages. While it is true that high-density computing drives up cooling requirements, there are also numerous technologies to address this.

Large evaporative cooling towers, which can consume tens of thousands of cubic metres a year, are not popular in the UK. By August, a techUK report had found that half of England’s data centres now use waterless cooling. Other reports also suggested that used water could be deployed to cool data centres.

Industry guidance

Just as with carbon emissions, data centre water consumption is an issue both on site and through the energy supply chain. The authors of the Drowning in Data paper recognised this early on and structured the guidance around water efficiency in the cooling system; the type of water drawn on site and how it can be treated; and the water footprint of the energy supply.

The paper shows that operators, vendors and policymakers are collaborating to tackle water use with the same rigour applied to energy efficiency—and recognises that it is a system with many moving parts.

The fundamentals of water stewardship

The paper outlines six actionable principles for reducing water impact. It also recognises that these are interrelated, and that they have a relationship with energy efficiency. A brief overview is given below:

  1. Evaluate cooling systems
    Not all cooling systems are created equal. For a 5 MW data centre in London, designs that involve cooling towers can consume around 38,000 m³ of water a year, adiabatic coolers around 800 m³/year, and dry coolers would result in no direct water use. Selecting the right technology can cut water use by orders of magnitude (see the per-megawatt comparison after this list).
  2. Minimise the water footprint of the energy used
    Beyond direct consumption, electricity generation carries an embedded water cost. No studies have yet defined the proportion for AI workloads, but studies on another intensive compute operation – Bitcoin – suggest that most of this sits in the energy footprint. Maximising energy efficiency, and using energy supplies with lower water footprints, is a key part of good water stewardship.
  3. Design with the surrounding environment in mind
    Cooling systems must take into account the surrounding environment in order to balance savings in direct water use (through reduced cooling demand) with indirect water waste through increased electricity use overall.
  4. Design with non-potable water in mind
    Grey water systems and rainwater harvesting can offset potable water demand, reducing strain on municipal supplies. However, different water qualities require different levels of electricity to make them suitable for cooling systems, and this needs to be considered.
  5. Apply systems thinking
    The surrounding community’s needs also play a part. In water-stressed areas, reducing direct water use will be a priority. In cooler, wetter areas, priority may shift towards the benefits of heat generation from the data centre—captured by direct-to-chip cooling and fed into district heating systems.
  6. Introduce circular economy principles for hardware refresh
    Extending IT equipment life and promoting reuse reduces embodied water in manufacturing – a hidden but significant component of total water impact. According to the Green Electronics Council, the manufacture of a single server requires 1,500–2,000 gallons of water.
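
To put the figures in principle 1 on a common footing, the short sketch below normalises them per megawatt. The 5 MW baseline and annual volumes come from the example above; the arithmetic is ours, for illustration:

```python
# Per-MW water intensity from the 5 MW London example above.
SITE_MW = 5

designs_m3_per_year = {
    "cooling towers": 38_000,
    "adiabatic coolers": 800,
    "dry coolers": 0,
}

for design, m3 in designs_m3_per_year.items():
    print(f"{design}: {m3 / SITE_MW:,.0f} m3 per MW per year")

# Cooling towers come out at ~7,600 m3/MW/yr versus ~160 m3/MW/yr
# for adiabatic coolers - roughly a 47x difference, before counting
# the indirect water embedded in the electricity supply (principle 2).
```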

Where next for water use in the data centre sector

Continuing press coverage in recent months shows that data centres are under scrutiny for their water use in a way that other sectors are not. A December 2025 article in The Guardian is one such example. With researchers increasingly turning towards the water footprint of AI, mainstream media is becoming more aware of indirect water consumption as a result of energy use.

No similar stories circulate about heavy industry or manufacturing, which are more established and more likely to fly under the radar. Whether or not this is fair is a moot point; water is the next frontier in data centre sustainability. As the industry scales to meet digital demand, water stewardship must become a core design principle, not an afterthought.

The Drowning in Data paper provides insight into how the sector can address this with an approach that balances operational resilience with environmental responsibility. However, it is just the start of a long, complex process of understanding impacts and balancing competing demands. The Data Centre Alliance welcomes suggestions and collaborations that can move the conversation forward. 

You can read the full paper and join the discussion at dcauk.org.

  •  

DCR Predicts: Is 2026 the year cloud customers take back control?

James Lucas, CEO at CirrusHQ, argues that cloud autonomy and ‘choice by default’ will accelerate as organisations push back on lock-in, cost shocks, and rigid contracts.

Over the last 12 months, we’ve seen more organisations recognise the value of the cloud. For us, there’s been a significant uptick in public sector organisations taking a cloud-native approach – something I expect will continue at pace into 2026.

As organisations realise the benefits of the cloud through smaller projects aligned with best practice, it’s encouraging to see them consider future migrations and deployments. But there are other developments I foresee over the next year.

Cloud autonomy will become a reality

Gone are the days when organisations wanted the security of a lengthy contract with a single vendor. Legacy vendor lock-in in the cloud remains a challenge for many – and we’ve seen a sharp rise in organisations being hit with significant cost hikes and lengthy contract extensions. Increasingly, they’re breaking away from the status quo and demanding cloud infrastructure that gives them the flexibility their business requires.

How organisations want to work with vendors has evolved significantly since many of those contracts were first signed. With cost and commitment under greater scrutiny, I expect more organisations will recognise the value of cloud marketplaces in 2026.

Marketplaces can give organisations the autonomy to pick and choose the services and tools they need, when they need them – without the pain of restriction. And when no one knows what might be around the corner from a macroeconomic or geopolitical perspective, organisations will increasingly seek to maintain control over the business operations that are within their power.

Shadow IT vs data sovereignty

Hyperscalers are creating and launching sovereign cloud offerings to guarantee where customer data is stored and processed. But organisations using cloud services must also ensure shadow IT doesn’t undermine sovereignty efforts or increase non-compliance. Enterprises need to take this seriously in 2026.

Many IT environments can benefit from stronger best practice – regardless of whether an organisation is pursuing something as complex as sovereign cloud. Much like the adage “if you don’t test your backups, you don’t have any,” in 2026 organisations should recognise that if they don’t have automated, detailed reporting on policy compliance, then they effectively don’t have it at all.

Without automated oversight, IT estates can become unwieldy, unmanageable, and non-compliant – and often end up duplicating work and data. By automating the detection of non-compliant activity, organisations can adopt a ‘shift left’ approach: addressing issues earlier in the process and keeping the environment secure and manageable.
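
As a purely hypothetical illustration of that ‘shift left’ approach, the sketch below scans a toy resource inventory for sovereignty violations before anything is deployed; the inventory format and the allowed-region policy are our own assumptions, not any vendor’s API:

```python
# Hypothetical sketch of 'shift left' compliance automation: scan a
# resource inventory for data-sovereignty violations before deploy.
# Inventory format and policy are illustrative assumptions.

ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}  # example policy

inventory = [
    {"name": "customer-db", "region": "eu-west-1",
     "tags": {"data": "personal"}},
    {"name": "analytics-copy", "region": "us-east-1",
     "tags": {"data": "personal"}},
    {"name": "public-site", "region": "us-east-1",
     "tags": {"data": "public"}},
]

def violations(resources):
    # Flag resources holding personal data outside approved regions.
    for r in resources:
        if (r["tags"].get("data") == "personal"
                and r["region"] not in ALLOWED_REGIONS):
            yield r

for r in violations(inventory):
    print(f"NON-COMPLIANT: {r['name']} in {r['region']}")

# Run on every change (e.g. in CI): failing the pipeline here is the
# automated, detailed reporting the adage above calls for.
```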

AI and the cloud will be co-dependent

Unsurprisingly, AI will remain top of mind for organisations over the coming year. While many will look to AI to drive transformation, it will require a solid data foundation to thrive.

As we saw recently at AWS re:Invent, the cloud is entering a new phase of maturity in 2026, and major platform investments will likely focus on three areas: advanced AI, data consolidation, and financial control.

From what we’re seeing in the wider market, cloud platforms will make AI development more dependable by automatically managing steps, fixing errors, and tracking complex jobs – dramatically improving the stability of AI tools and long-running workloads.

For those concerned about AI’s environmental impact over the coming year and beyond, the answer isn’t halting progress. It’s treating climate, power, and water considerations as measurable factors to be managed alongside performance and cost. Thoughtful choices around architecture, suppliers, and workload optimisation can help ensure AI delivers value while aligning with sustainability goals.

Ultimately, success in 2026 won’t just be measured by migration speed. It will be measured by whether organisations can combine the foundational stability of the cloud with proactive compliance – so technology decisions are considered, deliberate, and future-proof.

That means getting cloud systems ready to operate more efficiently and intelligently. Making the cloud work harder and deliver maximum value for the business is clearly the direction we’re headed – and it’s a positive shift I fully support.

This article is part of our DCR Predicts 2026 series. Check back every day this week for a new prediction, as we count down the final days of January.

DCR Predicts 2026
  •  

Waste heat from UK data centres could heat 3.5m+ homes

Waste heat from the UK’s latest crop of data centres could be used to heat at least 3.5 million homes by 2035, according to new research that argues the country risks letting a major low-carbon heat source go unused without investment in heat network infrastructure.

The analysis, produced by heat mapping organisation EnergiRaven in partnership with Danish energy and sustainability consultancy Viegand Maagøe, links projected growth in data centres to a significant rise in recoverable ‘waste’ heat. It estimates that data centres could provide enough heat for between 3.5 million and 6.3 million homes by 2035, depending on factors including the efficiency and design of future facilities.
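
The homes figure scales roughly linearly with IT load, and a back-of-envelope calculation shows how. The assumptions below – continuous operation, around 80% of consumed power recoverable as useful heat, and roughly 10 MWh of heat demand per home per year – are ours, for illustration only:

```python
# Back-of-envelope: homes heatable from data centre waste heat.
# Assumptions (illustrative): IT load runs continuously, ~80% of
# consumed power is recoverable as useful heat, and an average UK
# home needs ~10 MWh of heat a year.

HOURS_PER_YEAR = 8_760
RECOVERABLE_FRACTION = 0.8
HOME_HEAT_MWH_PER_YEAR = 10

def homes_heated(it_load_mw):
    heat_mwh = it_load_mw * HOURS_PER_YEAR * RECOVERABLE_FRACTION
    return heat_mwh / HOME_HEAT_MWH_PER_YEAR

print(f"{homes_heated(100):,.0f} homes from a 100 MW campus")   # ~70,000
print(f"{homes_heated(5_000):,.0f} homes from 5 GW nationally")  # ~3.5m
```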

The research lands as the UK grapples with two parallel challenges: the rapid expansion of energy-hungry digital infrastructure to support cloud computing and AI, and the long-running difficulty of decarbonising heat – still dominated by gas boilers across much of the housing stock.

EnergiRaven argues that many existing and planned data centres are located close to proposed new towns and to communities facing higher levels of fuel poverty, raising the prospect of linking local heat demand with a growing heat supply that would otherwise be rejected into the atmosphere.

“Our national grid will be powering these data centres – it’s madness to invest in the additional power these facilities will need, and waste so much of it as unused heat, driving up costs for taxpayers and bill payers,” commented Simon Kerr, Head of Heat Networks at EnergiRaven.

“Microsoft has said it wants its data centres to be ‘good neighbours’. Giving heat back to their communities should be an obvious first step.”

How Manchester could be an ideal pilot

The report points to Greater Manchester as one area where this alignment could be particularly strong. It notes plans for around 15,000 homes at the Victoria North development and a further 14,000-20,000 at Adlington, alongside clusters of fuel poverty.

At the same time, the analysis highlights a concentration of data centre infrastructure around the city region, including more than a dozen existing sites and four additional facilities planned. EnergiRaven argues that, in theory, this proximity could make it easier to connect heat sources and new developments – provided heat networks are planned early enough, and built at sufficient scale.

More broadly, the research suggests the same pattern appears across the UK: growth in data centres is expected to increase the amount of recoverable heat, but the ability to use it will depend on whether networks exist to move that heat into nearby homes and buildings.

How heat networks work

Capturing waste heat typically requires a heat network: insulated pipework that transports hot water from a heat source to buildings, where heat interface units (HIUs) can replace individual gas boilers. The report notes that waste heat recovery is widely used across parts of northern Europe, particularly in Nordic countries, where major sources of waste heat — including data centres, power stations and other industrial processes — are more routinely integrated into district heating systems.
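
The carrying capacity of such a network follows from a simple relationship: heat delivered equals flow rate × water’s specific heat × the temperature difference between flow and return. A short illustration, with an assumed 30K difference:

```python
# Why pipes can move a lot of heat: power = flow * c_p * dT.
# Water's heat capacity (~4.19 kJ/kg.K) and a large flow/return
# temperature difference make even modest flows useful.

CP_WATER_KJ_PER_KG_K = 4.19
DELTA_T_K = 30.0  # assumed flow/return difference

def network_mw(flow_kg_per_s):
    return flow_kg_per_s * CP_WATER_KJ_PER_KG_K * DELTA_T_K / 1_000

print(f"{network_mw(100):.0f} MW delivered at 100 kg/s")  # ~13 MW
```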

In the UK, heat networks remain a comparatively small part of the heating mix, but policy has been moving to encourage growth. Some cities have already been designated as ‘Heat Network Zones’, where heat networks are assessed as the cheapest low-carbon option for decarbonising heat locally.

Regulatory changes are also on the horizon. Ofgem is due to take over regulation of heat networks in 2026, and new technical standards will be introduced through the Heat Network Technical Assurance Scheme (HNTAS), intended to improve consumer protections and investor confidence.

The Government’s recent Warm Homes Plan also includes a target to double the share of heat demand met by heat networks in England to 7% (27 TWh) by 2035, with a longer-term expectation that heat networks could supply around a fifth of all heat by 2050. It also pledges £195 million per year through the Green Heat Network Fund to support heat network development.

However, EnergiRaven argues that current policy settings still fall short of what would be needed to take full advantage of large-scale waste heat from data centres.

“Current policy in the UK is nudging us towards a patchwork of small networks that might connect heat from a single source to a single housing development. If we continue down this road, we will end up with cherry-picking and small, private monopolies – rather than national infrastructure that can take advantage of the full scale of waste heat sources around the country,” Kerr added.

“We know that investment in heat networks and thermal infrastructure consistently drives bills down over time and delivers reliable carbon savings, but these projects require long-term finance. Government-backed low-interest loans, pension fund investment, and institutions such as GB Energy all have a role to play in bridging this gap, as does proactivity from local governments, who can take vital first steps by joining forces to map out potential networks and start laying the groundwork with feasibility studies.”

A ‘heat highways’ argument – and what it would change

A central recommendation in the analysis from EnergiRaven is the need for larger, strategic networks – which it describes as ‘Heat Highways’ – capable of transporting waste heat over longer distances and linking multiple sources and demand centres. The report suggests that smaller, isolated schemes may struggle to exploit the growing scale of data centre waste heat, particularly as facilities cluster in certain regions rather than being evenly spread across the UK.

Viegand Maagøe’s Peter Maagøe Petersen argues that building larger thermal networks could also provide benefits beyond household heating, including grid balancing and energy security.

“We should see waste heat as a national opportunity. In addition to heating homes, heat highways can also reduce strain on the electricity grid and act as a large thermal battery, allowing renewables to keep operating even when usage is low, and reducing reliance on imported fossil fuels. As this data shows, the UK has all the pieces it needs to start taking advantage of waste heat – it just needs to join them together,” he noted.

“With denser cities than its Nordic neighbours, and a wealth of waste heat on the horizon, the UK is a fantastic place for heat networks. It needs to start focusing on heat as much as it does electricity – not just for lower bills, but for future jobs and energy security.”

The underlying message from both organisations is blunt: data centre growth is already being planned and powered. The question is whether the UK will treat the heat those facilities inevitably produce as a resource – or continue to design energy infrastructure that ignores it.

  •  

DCR Predicts: UK data centres are booming – but is the power running out?

A panel of experts explore why grid capacity, connection queues, and rising AI power density are starting to dictate what can be built in 2026 – and where.

The UK’s data centre boom is accelerating, fuelled by the AI gold rush. Hyperscalers are expanding campuses and investment continues to flow, but the practical limits of growth are becoming harder to ignore.

Data centres already account for around 2.5% of the UK’s electricity consumption, and with AI workloads accelerating, that could rise sharply. Power availability, grid connection delays, planning constraints and sustainability pressures are no longer background considerations. As 2026 approaches, they are actively shaping what can be built, where, and how.

Power limits are no longer theoretical

For years, efficiency improvements helped offset rising demand, but that buffer is wearing thin as AI pushes power density beyond what many facilities were designed to support.

Skip Levens, Product Leader and AI Strategist for the LTO Program at Quantum, sees a clear roadblock ahead. “In 2026, AI and HPC data centre buildouts will hit a non-negotiable limit: they cannot get more power into their data centres. Build-outs and expansions are on hold and power-hungry GPU-dense servers are forcing organisations to make hard choices.”

He suggests that modern tape libraries could be the solution to two pressing problems: “First by returning as much as 75% of power to the power budget to ‘spend’ on GPUs and servers, while also keeping massive data sets nearby on highly efficient and reliable tape technology.”

Whether or not operators adopt that specific approach, the wider point holds. Growth is no longer just about adding capacity – it’s about how power is allocated and conserved within fixed limits.

Sustainability under pressure

Sustainability remains a defining theme for the sector, but the pace of AI-driven expansion is testing how deeply those commitments are embedded.

Terry Storrar, Managing Director at Leaseweb UK, describes the balancing act many operators are facing, “Sustainability is still the number one topic in the data centre industry. This has to work for the planet, but also from an economic perspective.

“We can’t keep running huge workloads and adding these to the grid,” he warns, “it’s simply not sustainable for the long term. So, there is huge investment into how we make technology do more for less. In the data centre industry, this translates into achieving significant power efficiencies.”

Mark Skelton, Chief Technology Officer at Node4, agrees, warning, “Data centres already consume around 2% of national power, while unchecked growth could push that to 10-15%, at a time when the grid is already strained and struggling to keep pace with soaring demand. In some areas, new developments are being delayed simply because the grid cannot deliver the required capacity quickly enough.”

To put this into perspective, Google’s new Essex facility alone is estimated to emit the same amount of carbon as 500 short-haul flights every year.

Grid delays, planning and skills gaps

There’s also a broader question of how well prepared the UK actually is for such a rapid scale-up in data centre infrastructure.

“Currently, the rush to build is overshadowing the need for a comprehensive approach that considers how facilities draw power and utilise water, as well as how their waste heat could be repurposed for nearby housing or industry,” Node4’s Skelton continues. “The technology to do this already exists, but adoption remains limited because there is little incentive or regulation to encourage it.”

In the UK, high-capacity grid connections can take over a year to secure, while planning delays and local opposition add further friction. Another roadblock is that “communities will increasingly challenge data centre expansion over water and energy use,” warns Curt Geeting, Acoustic Imaging Product Manager at Fluke. This is “pushing operators toward self-contained microgrids, hydrogen fuel cells, and other alternative power sources. Meanwhile, a growing shortage of skilled technicians and electricians will become a defining constraint.”

Geeting believes automation and AI will be key to tackling some of these infrastructure roadblocks. “The data centre test and measurement market will enter 2026 on the brink of a major transformation driven by speed, density, and intelligence. Multi-fibre connectivity will expand rapidly to meet the bandwidth demands of AI-driven workloads, edge computing, and cloud-scale growth.

“Very small form factor connectors, multi-core fibre, and even air-core fibre technologies will begin reshaping how data moves through high-density environments – enabling faster transmission with lower latency. At the same time, automation and AI will take centre stage in testing and diagnostics, as intelligent tools and software platforms automate calibration tracking, compliance verification, and predictive maintenance across vast, complex facilities.”

Edge, sovereignty and a rethink of scale

Data centres remain the backbone of the digital economy, underpinning everything from cloud services to AI and edge computing. With the rapid rise in AI, there are concerns that the UK will struggle to keep pace.

“The AWS outage reminded everyone how risky it is to depend too heavily on centralised cloud infrastructure,” warns Bruce Kornfeld, Chief Product Officer at StorMagic. “When a single technical issue can disrupt entire operations at a massive scale, CIOs are realising that stability requires balance.

“In 2026, more organisations will move toward proven on-premises hyperconverged infrastructure for mission-critical applications at the edge. This approach integrates cloud connectivity to simplify operations, strengthen uptime and deliver consistent performance across all environments. AI will continue to accelerate this shift.”

“The year ahead will favour a shift toward simplicity, uptime and management,” he adds. “The organisations that succeed will be those that figure out how to avoid downtime with simple and reliable on-prem infrastructure to run local applications. These winners understand that chasing scale for its own sake does nothing but put them in a vulnerable position.” This redistribution may ease pressure on hyperscale campuses.

Looking to 2026

Looking ahead to 2026, the pressures facing UK data centres are unlikely to ease. Power constraints, grid delays and sustainability expectations are becoming long-term issues, not just temporary obstacles. While technologies like quantum computing may eventually reshape infrastructure design, they won’t resolve the immediate challenges operators face today. The UK still has an opportunity to lead in AI and digital infrastructure, but only if growth is planned with constraint in mind. Without clearer coordination, incentives and accountability, the rush to build risks locking inefficiencies into the system for years to come. 

This article is part of our DCR Predicts 2026 series. Check back every day this week for a new prediction, as we count down the final days of January.

DCR Predicts 2026
  •  

Prism Power Group eyes US acquisition to support booming data centre buildout

A UK-based specialist in electrical switchgear and critical power systems is expanding into the United States, as the surge in data centre construction strains power infrastructure and exposes shortages in both equipment and labour.

Prism Power Group, headquartered in Watford, is looking to purchase a US business that already holds UL certification and is raising $40 million to fund the acquisition and further expansion in the UK.

The move comes as developers attempt to keep pace with rising demand driven by artificial intelligence, cloud computing and other digital services – and as utilities and supply chains struggle to deliver connections and key components quickly enough. In fact, it’s estimated that just 25GW of grid capacity will come online in the next three years, leaving the industry 19GW short of the power it needs to realise its expansion plans. 

That’s why Prism Power Group wants to expand into the US. It says it has built its reputation delivering mechanical and electrical infrastructure for modular data centre projects in the UK and across Europe since 2005, including work spanning high-voltage substations, back-up generation and low-voltage switchboards for tightly scheduled turnkey developments.

Adhum Carter Wolde-Lule, Director at Prism Power Group, explained, “The scale and urgency is such that America’s data centre expansion has become an international endeavour, and we’re again able to punch well above our weight in providing the niche expertise that’s missing and will augment strained local supply chains – on the ground, straight away.

“Major power manufacturers in the United States are ramping up production, while global giants have announced new stateside factories for transformers and switchgear components, aiming to cut lead times and ease the backlog – but those investments will take years to bear fruit and that is time the US data centre market simply doesn’t have.”

Keith Hall, CEO at Prism Power Group, added, “For overseas engineering companies like us with uniquely skilled contractors and technicians, plus a proven track-record in modular power systems that can be built off-site, the time is now and represents an exceptional opening into the world’s fastest-growing infrastructure market. Equally, for the US sector, the willingness to look globally for critical power systems excellence will prove vital in keeping ambitious build-outs on schedule and preventing the data centre explosion from hitting a capacity wall.”

Prism’s announcement taps into a wider trend: US developers increasingly look overseas for expertise and equipment, as domestic manufacturing and skills pipelines struggle to scale at the same pace as data centre growth.

According to published figures, tech giants including Amazon, Google and Microsoft already operate more than 520 data centres across the US, with more than 400 additional facilities under construction or development. Industry analysts estimate that more than 100 GW of new data centre capacity could come online between 2024 and 2035 – a level of growth that is now exposing bottlenecks in both grid infrastructure and construction resources.

Those constraints are already being felt in the biggest markets.

In Northern Virginia, the largest data centre region in the country, project backlogs have reportedly contributed to multi-year delays for new power connections as utilities reinforce high-voltage infrastructure. Similar issues are emerging in Silicon Valley, where two large AI-focused facilities in Santa Clara are standing empty while the city-owned utility upgrades its grid and sequences power delivery as new substations come online, according to Prism. 

While the examples are region-specific, the underlying challenge is national: more capacity is being planned and built than the power system, supply chain and labour market can comfortably support at current speed.

However, it isn’t only hardware creating pressure. Prism says specialist electricians, installers and maintenance engineers are in such demand that contractors report backlogs of 12 months to staff new projects.

That matters because data centres need more than construction labour. Once operational, facilities require round-the-clock expertise to manage power distribution, cooling systems and emergency back-up power – and Prism warns that the talent pipeline is lagging behind the industry’s rapid growth. The concern, as analysts have repeatedly flagged in recent years, is that workforce constraints could affect both build schedules and long-term reliability.

Against that backdrop, Prism’s plan appears designed to remove one of the major market barriers for overseas entrants: US certification requirements. By acquiring a UL-certified business, rather than attempting to build a compliant operation from scratch, the company is aiming for a faster route into live projects, while also expanding its UK base as part of the same capital raise.

Prism has not disclosed which US market it will prioritise, or the size and specialism of the acquisition target, but the rationale is clear: the US data centre boom is forcing an international supply response, and companies able to deliver power infrastructure at pace are betting they can secure a role in that buildout.

  •  

DCR Predicts: Hybrid wins in 2026 – and storage has to catch up

BS Teh, Chief Commercial Officer at Seagate Technology, outlines the security, edge and cost pressures pushing organisations beyond cloud-first.

The speed at which data is generated, used and stored today is unprecedented, and it continues to grow. In 2026, this trend will accelerate further, placing even greater demands on businesses.

Globally, this is reshaping not only the IT landscape but also the way companies innovate. Data has long been the foundation for innovation: it enables the development of new business models, the automation of processes, and the customisation of products to meet individual customer needs.

Teams are increasingly data-driven, using intelligent analytics to make faster, more informed decisions. At the same time, new forms of collaboration are emerging, powered by AI tools that consolidate knowledge and foster creative exchange.

In the age of AI, the value of data is more evident than ever: it is the most important asset in the digital economy. AI algorithms rely on analysing large and diverse datasets to identify patterns, generate forecasts and create value.

The better companies capture, structure and store their data, the more effectively they can leverage AI’s potential. This means businesses capable of managing and storing large, complex datasets efficiently gain critical competitive advantages. Those able to handle data securely, flexibly and sustainably are laying the foundation for innovation, agility and long-term success.

As a result, the data storage industry is at a turning point. It must not only keep up with exponential data growth, but also deliver solutions that meet demands for sustainability, scalability and cost efficiency. This transformation is largely driven by rapid advances in AI, which generate and process ever-growing volumes of data and set new requirements for storage infrastructure.

Hybrid strategies for the next generation of the data economy

The role of AI as a growth driver and ‘data multiplier’ is undeniable. AI has made data the most valuable asset in the digital economy, prompting a fundamental shift in enterprise computing – one that is already shaping data centre planning and investment today, and will continue to do so in 2026.

Nearly 75% of business leaders are moving from a ‘cloud-first’ approach to a hybrid model that combines public cloud, private infrastructure and edge computing. The reasoning is clear: companies want to enhance security, enable real-time edge applications and reduce costs, while meeting the growing demands of AI-driven workloads.

The conclusion is simple: all data has value today. Unlocking that value requires a smarter, hybrid approach to IT infrastructure and storage—one that meets both today’s and tomorrow’s needs.

Generative AI accelerates the content explosion

Another key driver of the data explosion is GenAI, which is fuelling a boom in digital content creation. GenAI is democratising content production: employees across departments can now generate text, images and videos within minutes. This fundamentally changes workflows and introduces a new, data-driven reality for businesses.

The impact is already clear. Nearly three-quarters of businesses report that GenAI enables employees outside traditional creative roles to create content independently – for example, in sales, HR or product management.

This results not only in more content, but also in new formats that were previously too costly or time-consuming to produce, such as personalised videos, training materials or marketing assets. Over two-thirds of companies report an overall increase in content files, with faster production speeds and greater variety.

Many now create multiple versions of the same content to target audiences more precisely. At the same time, average file sizes are growing, and nearly half of companies are storing larger volumes of similar or redundant files, further increasing storage demands.

To keep up, many companies plan to retain their data for longer and are increasingly adopting data-tiering and archiving strategies. While a majority have already expanded or modernised their storage infrastructure, only one-third feel fully prepared for the demands of GenAI workloads today.
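
In practice, tiering of this kind is policy-driven: content migrates down the hierarchy as it ages out of active use. A minimal sketch of one such policy is below – the tier names and age thresholds are illustrative assumptions, not Seagate’s recommendations.

```python
from datetime import datetime, timedelta

# Illustrative age-based tiering policy. The thresholds and tier names
# are assumptions for the sketch, not vendor guidance.
TIERS = [
    (timedelta(days=30), "hot"),    # actively used content
    (timedelta(days=365), "warm"),  # occasionally accessed
]
ARCHIVE_TIER = "cold-archive"       # retained for compliance or AI reuse

def assign_tier(last_accessed: datetime, now: datetime) -> str:
    """Map a file's last-access time to a storage tier."""
    age = now - last_accessed
    for threshold, tier in TIERS:
        if age <= threshold:
            return tier
    return ARCHIVE_TIER

now = datetime.now()
print(assign_tier(now - timedelta(days=3), now))    # -> hot
print(assign_tier(now - timedelta(days=400), now))  # -> cold-archive
```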

By 2026, it will become clear which companies have set the right course for sustainable data management, and which risk being overwhelmed by the content explosion.

Future-proof storage strategies will determine success in 2026

The content explosion driven by GenAI is both an opportunity and a challenge. Companies that align their storage strategies accordingly will benefit twice: they will unlock the full potential of AI-generated content while maintaining control over their data assets.

Data is becoming a strategic resource, as AI is transforming creativity, productivity and entire industries at an unprecedented pace. Businesses should treat every single byte of data as valuable, because it truly is.

This article is part of our DCR Predicts 2026 series. Check back next week – the final week of the series – for a new prediction every day.

  •  

Microsoft chief admits AI boom could become a bubble without wider adoption

Microsoft chief executive Satya Nadella has warned that the AI boom risks turning into a speculative bubble unless adoption spreads far beyond big tech firms and wealthier developed markets.

Speaking at the World Economic Forum annual meeting in Davos on Tuesday, Nadella argued that the long-term success of the technology will hinge on whether it is taken up across a broad range of industries — and whether emerging markets can access the same productivity gains being claimed in the US and Europe.

“For this not to be a bubble by definition, it requires that the benefits of this are much more evenly spread,” said Nadella. He added that a “tell-tale sign” of a bubble would be if the upside remains concentrated among tech companies, rather than showing up in the performance of other sectors.

The warning lands as investment in AI infrastructure continues to accelerate, with governments, hyperscalers and enterprises pouring money into data centres, chips and new software tools — often on the promise that generative AI will unlock major gains in productivity. Nadella, however, suggested that the credibility of those claims will ultimately be tested outside the technology sector and outside the developed world.

For Nvidia, one of the big winners of the boom, chief executive Jensen Huang used his Davos appearance to argue the opposite case: that the industry needs even more investment, particularly to meet AI’s power demands, because benefits are already emerging across multiple sectors – a view that contrasts with Nadella’s warning that the ‘proof’ must show up more widely.

That doesn’t mean Nadella is negative on AI. Quite the opposite: he maintained that he expects the technology to prove transformative, pointing to its potential role in scientific discovery and healthcare. “I’m much more confident that this is a technology that will… diffuse faster, and bend the productivity curve, and bring local surplus and economic growth all around the world,” he said.

Nadella’s comments were made during an on-stage conversation with BlackRock Chief Executive Larry Fink, who has been bullish on AI, with BlackRock involved in major investments in the space, including in the UK.

But public debate about whether AI is a “bubble” has continued to intensify, and recent commentary from influential figures has done little to quell those fears. Last year, Alphabet chief executive Sundar Pichai said the investment boom in AI had “elements of irrationality”, while the Bank of England has warned of a “sharp correction” in major tech firms should an AI bubble burst.

A key concern underpinning the debate is the uneven pace of adoption. While large multinationals and digitally mature economies have moved quickly to test copilots, automation tools and AI-enabled workflows, uptake is slower elsewhere – raising the possibility that productivity benefits could remain concentrated in richer markets, at least in the near term. Nadella’s message in Davos was that broad diffusion is not a nice-to-have: it is essential if AI is to underpin durable economic growth rather than a cycle of hype.

It is also why the question of who is expected to drive adoption has become a flashpoint. The attempt at Davos to frame AI’s success as something that ultimately depends on users and customers has not landed well with everyone.

On social media, some users rejected the implication that consumers bear responsibility for whether the technology delivers on its promise. One user on Reddit wrote, “That’s how you know that a product is good right? Not when it spreads organically, but when the CEOs have to keep sounding alarms and beg for more money, correct?”

The pushback comes at a moment when public frustration with generative AI outputs has been increasingly visible. Nadella drew criticism earlier this month after urging people to stop using the term “slop” to describe low-quality AI-generated content – a reaction that speaks to a wider trust problem that could itself slow the kind of broad-based adoption Nadella says is necessary to avoid an AI bubble.

  •  

Can nuclear keep the AI era online?

Adhum Carter Wolde-Lule, Chief Strategy Officer at Prism Power Group, explores how rising AI-driven demand is exposing grid constraints, and why SMRs could become a long-term route to reliable, low-carbon power for data centres.

The rise of artificial intelligence and high-density computing is driving an extraordinary surge in data centre power consumption worldwide. Each new generation of AI models requires more computational capacity, and therefore more electricity, than the last.

Globally, data centre energy use is expected to jump from around 460 TWh in 2022 to more than 1,000 TWh by 2026. In the UK alone, data centres already account for 1-2% of national electricity demand, a figure set to climb sharply as AI workloads ramp up.

This accelerating demand is putting intense pressure on already stretched power grids. In regions such as West London, capacity constraints are so severe that new developments have been told to expect no grid connection until the mid-2030s. As a result, power availability has become the number one concern for data centre operators, with more than 90% of industry professionals reporting it as a top challenge.

The central dilemma is clear: how can data centres guarantee 24/7 uptime while meeting environmental commitments, when neither existing grid infrastructure nor intermittent renewable energy can fully meet their needs?

AI, cloud and high-performance computing facilities often require hundreds of megawatts of constant power – as much as a small city. Grid operators around the world are struggling to cope.

Data centres cannot tolerate power interruptions – round-the-clock reliability is non-negotiable – yet renewable energy sources such as wind and solar are inherently variable. Battery storage can smooth short-term fluctuations, but even the best systems today only provide several hours of firm supply.
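
The ‘several hours’ ceiling follows directly from the ratio of stored energy to load, as a quick illustration with entirely assumed figures shows.

```python
# Battery autonomy is simply stored energy divided by constant draw.
# Both inputs below are assumptions chosen for illustration.
facility_load_mw = 100.0      # constant critical load of the facility
battery_capacity_mwh = 400.0  # installed battery energy storage

autonomy_hours = battery_capacity_mwh / facility_load_mw
print(f"Firm supply for roughly {autonomy_hours:.1f} hours")  # -> 4.0
```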

This has left many facilities dependent on diesel generators for emergency coverage, an arrangement that is both environmentally damaging and inconsistent with corporate net zero strategies. The growing gap between intermittent renewable generation and constant data centre demand is forcing operators to look for alternative, dependable sources of clean power.

One technology drawing increasing attention is nuclear power – specifically, Small Modular Reactors (SMRs). Unlike traditional gigawatt-scale nuclear plants, SMRs are designed to be built in factories, transported in modules and assembled on site. Most designs fall in the 50-300 MW range, making them far more flexible and suitable for industrial campuses.

SMRs offer a rare combination of carbon-free energy, a compact footprint and the ability to provide continuous baseload power at very high capacity factors.

They can be located close to where power is consumed, potentially even adjacent to large data centre clusters, reducing reliance on strained regional grids and cutting transmission losses.

Tech giants are already positioning themselves for a nuclear-powered future. Amazon Web Services has invested in an SMR developer and is acquiring a nearly 1GW nuclear campus to support its cloud operations. Microsoft has hired nuclear specialists and agreed to procure power from the restart of the Three Mile Island reactor. Google has committed to using power from six planned SMRs by 2030. Large colocation providers like Equinix and Switch have also signed agreements with microreactor developers.

The UK government aims to play a leading role in the global SMR market. In 2025, through its Great British Nuclear initiative, the Government selected Rolls-Royce SMR’s 470 MW modular reactor design. Backed by £2.5 billion of funding, the ambition is to deploy at least three reactors by the middle of the next decade, forming a foundation for a revived domestic nuclear industry.

For data centre developers, SMRs offer the possibility of stable, clean baseload power that can be positioned close to major AI hubs.

However, major hurdles remain. Nuclear projects, regardless of size, must undergo rigorous safety, regulatory and planning processes, which means long lead times. With the first reactors not expected until the mid-2030s, SMRs cannot solve today’s capacity crunch.

Despite these challenges, the UK risks falling behind if it does not move quickly. Other countries are already advancing SMR programmes, and delays could push deployment further into the 2040s. Because SMRs are a long-term solution, data centre operators must focus on bridging the gap between today’s energy constraints and tomorrow’s nuclear options.

The first priority is improving efficiency. Advances such as liquid and immersion cooling, smarter workload scheduling and more efficient chip designs can significantly reduce power needs, easing pressure on both grids and on-site systems.

Next is building on-site energy resilience. Many operators are investing in solar arrays, large-scale batteries, gas turbines and fuel cells to reduce grid reliance.

The industry should also engage in early-stage partnerships to test emerging technologies, including microreactors, advanced geothermal or hydrogen-ready systems. Power purchase agreements for existing nuclear or hydroelectric energy can also immediately strengthen sustainability and reliability.

AI is reshaping global energy demand faster than traditional infrastructure can adapt. The combination of unprecedented loads, strict uptime requirements and sustainability targets means data centres must rethink how they source power. SMRs represent a promising long-term answer: clean, stable power that can be deployed close to the point of use. But they will not arrive in time to solve immediate constraints.

Over the next decade, data centre operators will need a blend of efficiency gains, renewable integration, on-site generation and strategic planning, while preparing to take advantage of nuclear technologies as they mature.

Those who combine near-term pragmatism with long-term vision will be best positioned to deliver the reliable, sustainable, always-on digital infrastructure that the AI era demands.

  •  

Panduit names Holly Garcia as Chief Commercial Officer

Panduit has promoted Holly Garcia to Chief Commercial Officer, tasking her with leading the company’s global commercial strategy and customer-facing approach.

Garcia will report directly to Panduit President Marc Naese, with the appointment coming as the firm positions itself for growth across its electrical and network infrastructure markets.

“Holly has the vision and expertise to position our company for continued growth and success while deepening our customer relationships,” said Naese. 

“Holly’s leadership as Chief Commercial Officer will be instrumental in strengthening the customer experience and delivering the value our markets expect.”

Garcia most recently served as Vice President of Panduit’s Data Centre business unit, where she led growth and innovation initiatives and oversaw business strategy and new product introductions aimed at strengthening the company’s position in the data centre market.

Panduit said Garcia brings more than 25 years of experience across sales, marketing and business unit leadership.

“I’m honoured to take on the role of Chief Commercial Officer and excited to lead our commercial strategy during this time of growth,” explained Garcia. 

“Our team’s commitment to innovation and customer success has positioned us as a trusted partner globally, and I look forward to driving even greater value for our customers and stakeholders.”

  •  

DCR Predicts: The ‘gig economy’ is coming to data centres in 2026

Claire Keelan, Managing Director UK at Onnec, explains why project-based delivery models will become the backbone of new builds and upgrades in 2026, as traditional staffing struggles to match the pace and complexity of AI-led demand.

The data centre industry is constantly evolving. As AI workloads accelerate, operators are under mounting pressure to scale capacity while navigating skills shortages, infrastructure constraints and rising expectations around resilience. What worked a few years ago is no longer enough. Delivery models, workforce strategies and site design assumptions are all being tested.

In 2026, success will depend less on expansion and more on adaptability. Operators will need to rethink how projects are staffed, where capacity is built, and how existing assets are upgraded to meet AI demand. Flexible labour, broader talent inclusion, regional diversification and retrofitting will move from tactical considerations to strategic priorities.

The data centre ‘gig economy’ becomes backbone of delivery

Flexible labour models will underpin almost every new data centre project. Traditional staffing can’t scale at the speed AI demands. By 2026, flexible, crowdsourced, project-based teams will fill critical gaps across design, building, and operations. This shift isn’t about replacing expertise; it’s about redeploying it. Clear standards, accreditation, and safety frameworks will make flexibility viable at scale, turning part-time professionals and returning workers into a reliable, high-quality talent engine.

Women become central to meeting capacity targets 

With women making up less than 8% of the current workforce, the imbalance is holding the sector back. In 2026, diversity will shift from talking point to operational priority. This means targeted recruitment, retraining programmes, and mentorship networks designed to bring more women into engineering, safety, and leadership roles. Diversity will be treated as a business resilience issue, not just a social goal. This is because the industry can’t meet AI’s demands while sidelining a sizable portion of its potential workforce.

AI growth zones redraw the map

Regional ‘AI growth zones’ will emerge as the new engines of capacity. In 2026, Manchester, South Wales, and Scotland will continue to gain momentum thanks to lower land costs, renewable energy access, and close ties to academic institutions. This regional diversification will help balance power use and strengthen resilience against local constraints. The days of London and the M4 corridor as the single dominant hub are fading; the future of data centres is distributed, collaborative, and regionally connected.

Retrofitting becomes a reality check

With the UK home to one of the world’s largest portfolios of legacy data centres, over the next year operators must prove how fast they can innovate to stay ahead in the new AI landscape. 

In 2026, we’ll see a surge in retrofitted data centres as operators rush to upgrade legacy sites to meet soaring AI demand. Power and cooling will be complex, but cabling and network capacity will be the real bottlenecks. Poor-quality or overcrowded cabling limits density, throttles performance, and makes future upgrades almost impossible. 

Smart operators will invest early in high-grade structured systems that support modular expansion and long-term flexibility. ‘Retrofit-ready’ will become the new benchmark for responsible, future-proof design.

Looking into 2026

By 2026, the data centre sector will be defined less by how much capacity it builds and more by how intelligently it evolves. AI is compressing timelines, exposing fragility, and forcing long-term decisions into the present. Operators that treat this moment as a simple scaling challenge will struggle. Those that recognise it as a structural reset will set the pace.

Data centres are becoming critical national infrastructure for an AI-driven economy, and resilience will matter as much as raw performance. Leadership will belong to operators that move early, design for uncertainty, and embed adaptability. The question in 2026 is not who can grow fastest, but who can keep up when the rules keep changing.

This article is part of our DCR Predicts 2026 series. Come back every week in January for more.

  •  

Multi-million-pound data centre electrical infrastructure upgrade completed at Heathrow Corporate Park, London

Managed IT service provider Redcentric has completed a multi-million-pound electrical infrastructure upgrade as part of a wider data centre refurbishment project at its facility located at Heathrow Corporate Park in London. 

The project, which included a UPS replacement, was part-funded through the Industrial Energy Transformation Fund (IETF), now closed for applications, which supports the deployment of technologies that enable businesses with high energy use to transition to a low-carbon future.

As part of Redcentric’s high-profile project, leading UPS manufacturer CENTIEL has delivered equipment to protect an existing 7 megawatts of critical load through its multi-award-winning, highly efficient, true modular uninterruptible power supply (UPS), StratusPower™. The deployment of this modular UPS technology enables Redcentric to scale to 10.5 megawatts without the need for any further infrastructure change.

The facility, which is popular with FTSE 100 companies, has seen UPS efficiency improve from below 90% to above 97% since the upgrade. This represents the potential to avoid more than 8,000 tonnes of CO₂ emissions over the next 15 years, supporting ESG compliance for both Redcentric and its household-name clients.
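
As a rough sanity check, the headline figure is consistent with simple loss arithmetic. The sketch below assumes a constant 7 MW load and a grid carbon factor of 0.1 kgCO₂ per kWh; both are assumptions, since the article states neither a load profile nor a carbon factor.

```python
# Back-of-envelope check of the quoted UPS savings. All inputs besides
# the efficiencies and the 7 MW load are illustrative assumptions.

LOAD_MW = 7.0                   # protected critical load (from the article)
EFF_OLD, EFF_NEW = 0.90, 0.976  # legacy vs StratusPower efficiency
HOURS_15Y = 24 * 365 * 15       # 15-year horizon, ignoring leap days
CARBON_KG_PER_KWH = 0.10        # assumed grid carbon intensity

# UPS losses are the extra input power needed above the delivered load.
loss_old_mw = LOAD_MW / EFF_OLD - LOAD_MW   # ~0.78 MW dissipated
loss_new_mw = LOAD_MW / EFF_NEW - LOAD_MW   # ~0.17 MW dissipated

saved_mwh = (loss_old_mw - loss_new_mw) * HOURS_15Y
saved_tonnes = saved_mwh * 1_000 * CARBON_KG_PER_KWH / 1_000

print(f"Energy saved over 15 years: {saved_mwh:,.0f} MWh")  # ~79,600
print(f"CO2 avoided: {saved_tonnes:,.0f} tonnes")           # ~8,000
```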

Paul Hone, Data Centre Facilities Director, Redcentric, confirmed, “Our London West colocation data centre is a strategically located facility that offers cost-effective ISO-certified racks, cages, private suites, and complete data halls, as well as significant on-site office space. The data centre is powered by 100% renewable energy, sourced solely from solar, wind and hydro.

“In 2023, we embarked on the start of a full upgrade across the facility, which included the electrical infrastructure and live replacement of legacy UPS before they reached end of life. This part of the project has now been completed with zero downtime or disruption. At Redcentric, we pride ourselves on the high level of uptime across all of our data centres. The continuous reinvestment into new equipment to gain efficiencies now means we can continue to offer our valued clients a 100% uptime guarantee at our London West facility.

“In addition, for 2026, we are also planning a further deployment of 12 megawatts of power protection from two refurbished data halls being configured to support AI workloads of the future.”

Aaron Oddy, Sales Manager, Centiel, added, “A critical component of the project was the strategic removal of 22 megawatts of inefficient, legacy UPS systems. By replacing outdated technology with the latest innovation, we have dramatically improved efficiency, delivering immediate and substantial cost savings.

“StratusPower offers an exceptional 97.6% efficiency, dramatically increasing power utilisation and reducing the data centre’s overall carbon footprint – a key driver for Redcentric.

“The legacy equipment was replaced by Centiel’s StratusPower UPS system, featuring 14 x 500 kW modular UPS systems. This delivered a significant reduction in physical size, while delivering greater resilience, as a direct result of StratusPower’s award-winning, unique architecture.

“StratusPower provides an unprecedented level of resilience and availability, guaranteeing near-zero system downtime, ensuring the data centre can offer its clients the highest standard of power protection. Standardising with this design across the site enables UPS modules to be seamlessly redeployed between systems, ensuring maximum asset utilisation and operational agility.”

Durata, a leader in modular data centre solutions and critical power infrastructure, completed the installation to the highest standards.

Hone continued, “Environmental considerations were a key driver for us. StratusPower is a truly modular solution, ensuring efficient running and maintenance of systems. Reducing the requirement for major midlife service component replacements further adds to its green credentials.

“With no commissioning issues, zero reliability challenges or problems with the product, we are already talking to the Centiel team about how they can potentially support us with power protection at our other sites.”

  •  

BSI launches ‘Mark of Trust’ scheme for data centres

BSI has launched a new ‘Mark of Trust’ scheme designed to help data centre operators and their supply chains demonstrate that facilities and operations meet international standards for reliability, security and sustainability.

The scheme is positioned as a response to the rapid growth in global data centre capacity, which is being driven by AI and cloud computing, and the accompanying concerns around energy demand, water usage, supply chain resilience, regulatory compliance and the impact of new sites on local infrastructure and communities.

BSI says the Mark of Trust is based on international standards and is intended to provide a globally recognised way for operators to show alignment with best practice, particularly as emerging regulation and public scrutiny increase.

The standards and assurance body added that it has already certified BK Gulf LLC – an engineering, procurement and construction (EPC) contractor active in the UAE and wider Middle East market. BK Gulf received the mark in the ‘Availability and Protection’ module of the new scheme following a pilot phase.

David Mudd, BSI’s Global Head of Digital Trust Assurance, noted, “The promise of technology and in particular AI has never been greater, but it will not be realised without the necessary infrastructure sitting behind it. Tech companies face unprecedented operational, regulatory and reputational pressure as they try to meet the exponential growth and demand for data centers fuelled by the rise of AI. Organizations will now be able to meet these pressures head-on, while inspiring trust and confidence with clients, regulators and consumers that their facilities and operations meet global compliance and align with international best practice.”

A bid to reassure regulators and customers

The launch comes at a time when data centres are facing increasing pressure to evidence resilience and sustainability, with energy use and grid capacity frequently at the centre of planning debates. Operators in multiple markets are also contending with tighter expectations around cyber and physical security, alongside a growing focus on the provenance and robustness of critical supply chains.

BSI is positioning the Mark of Trust as a way for organisations to demonstrate compliance in a more consistent and recognisable way, rather than relying on fragmented or location-specific proof points.

The organisation also pointed to continued growth forecasts for the sector. It said the global data centre industry is expected to more than double from $242.72 billion to over $584 billion by 2032, with the number of hyperscale facilities forecast to roughly double every five years. In BSI’s view, that scale of expansion will only intensify scrutiny of how new facilities are designed, built and operated.

While the Mark of Trust has been framed around reliability, security and sustainability, BSI says the scheme is modular – allowing organisations to certify against specific focus areas, depending on what customers, regulators or local stakeholders are prioritising.

Andrew Butterfield, BSI’s Managing Director, Built Environment, added, “We’d like to congratulate BK Gulf LLC on certifying to the Availability and Protection module of the Mark of Trust, which demonstrates their leadership in industry best practice. We’re proud to be their trusted partner on this journey to driving innovation and excellence. BK Gulf LLC were among the first organisations to achieve the BIM Kitemark, and this latest certification further underscores their commitment to embracing international standards. BSI’s Mark of Trust will help organisations such as BK Gulf LLC, to build resilience, keep future-ready and secure an AI-future that works for all.”

What the Mark of Trust covers

BSI describes the Mark of Trust as an independent, globally recognised framework intended to validate technical, operational and compliance performance across data centre facilities and operations.

Two versions of the mark will be offered, one for facilities and another for services.

The framework is split into modules, with each one aimed at a specific challenge facing the sector. BSI says modules range from business continuity through to carbon usage and water management, enabling organisations to focus on the areas most relevant to their market and stakeholder expectations.

As priorities can vary significantly by region, particularly where grid constraints, water stress or planning environments differ, BSI says modules can be tackled in the order most appropriate for the organisation.

The body also says it will keep the scheme under review, with the number of modules and their requirements updated as the sector evolves.

  •  

Can AI make data centres greener, or will it simply make them bigger?

Peter Schwartz, Senior Technology Consultant at OryxAlign, explores how operators can use AI, modern cooling, and cleaner power to balance rising compute demand with genuine sustainability progress.

The swift integration of AI in sectors like healthcare and manufacturing has only increased pressure on data centre infrastructure. Energy consumption is high from the outset, when large models are trained, and remains substantial after deployment because of continuous inference cycles.

The steady demand for AI adds persistent pressure on facilities. Already, data centres are moving workloads into large-scale cloud platforms (hyperscale) or mixed (hybrid) cloud set-ups. As more activity becomes centralised, power and cooling demands in these facilities grow. This prompts operators to identify solutions that support expansion while meeting sustainability goals.

The search for innovation

Innovative thermal and power handling strategies serve as one answer for operators. Deployed together, these methods aim to reconcile environmental efficiency with increased compute density.

Liquid cooling, for example, previously associated with high-performance computing (HPC) deployments, is now used broadly across facilities for thermal management of high-density racks. Systems built around dielectric fluids or direct-to-chip water channels move heat more efficiently than air cooling, allowing higher rack densities and reducing the burden on traditional air-handling units. These methods also make it possible to build compact facilities that need fewer mechanical parts than their air-cooled counterparts.

AI cuts both ways here: it drives much of the sector’s energy demand, yet it also enables smarter thermal controls that help stabilise conditions and reduce energy consumption in dense compute zones. AI-driven cooling interprets sensor data to adjust environmental conditions in real time, especially as workload intensity picks up. This approach reduces unnecessary cooling activity and allows precise environmental control across the facility – especially valuable for AI training zones that experience rapid shifts in thermal load.
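
At its core, such a system is a closed loop: read sensors, compare against a target, adjust cooling effort. The sketch below uses a simple proportional rule in place of the learned models a real AI-driven system would apply; the sensor feed, setpoint and gain are all illustrative assumptions.

```python
import random

TARGET_INLET_C = 24.0  # assumed rack-inlet temperature target
GAIN = 0.05            # proportional gain (assumption)

def read_inlet_temp() -> float:
    """Stand-in for a real sensor feed from a BMS/DCIM (hypothetical)."""
    return 24.0 + random.uniform(-2.0, 2.0)

fan_speed = 0.5  # fraction of maximum airflow
for _ in range(5):
    error = read_inlet_temp() - TARGET_INLET_C
    # Raise airflow when racks run hot, ease off when they run cool,
    # so cooling effort tracks the actual thermal load.
    fan_speed = min(1.0, max(0.2, fan_speed + GAIN * error))
    print(f"inlet error {error:+.2f} C -> fan at {fan_speed:.0%}")
```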

Sustainability gains also hinge on cleaner power sourcing. Power Purchase Agreements (PPAs) help operators switch to renewable energy sources like solar, wind and hydropower, and have become popular investments for future-proofing facilities. Some data centres are now built alongside renewable assets to cut transmission losses and gain clearer insight into their electricity’s carbon profile.

Alongside these strategies, interest has grown in on-site microgrids, battery energy storage systems (BESS) and hydrogen fuel cells. Such innovations provide cleaner power that lowers dependence on legacy grids powered by fossil fuels. But these solutions do not yet guarantee long-term scalability or viable costs, putting them out of reach for many smaller organisations with less land and capital than the hyperscale providers.

Driving change through cloud

Major cloud companies also influence sustainability efforts across the sector. Microsoft and Amazon Web Services operate at a scale that places them among the world’s largest electricity users – and that same scale positions them as prominent low-carbon advocates. Their procurement models, certification pathways and carbon-neutrality commitments are setting expectations across the sector, shaping both colocation partnerships and policy discussions.

These providers encourage transparency and accountability through open-source design work and the promotion of shared frameworks. Efforts like carbon-aware computing, where workloads shift to periods or regions with cleaner energy, point towards digital infrastructure tuned for sustainable performance.
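
The core of carbon-aware computing fits in a few lines: defer or relocate flexible work to wherever the grid is currently cleanest. In the minimal sketch below, the region names and intensity readings are made-up placeholders standing in for a real carbon-signal feed.

```python
# Pick the region with the cleanest grid for a deferrable batch job.
# The snapshot values are invented for illustration only.

def pick_cleanest_region(intensity_g_per_kwh: dict[str, float]) -> str:
    """Return the region currently reporting the lowest carbon intensity."""
    return min(intensity_g_per_kwh, key=intensity_g_per_kwh.get)

snapshot = {            # hypothetical gCO2/kWh readings
    "eu-north": 45.0,   # hydro-heavy grid
    "uk-south": 180.0,
    "us-east": 320.0,
}

print(f"Dispatching batch workload to {pick_cleanest_region(snapshot)}")
# -> Dispatching batch workload to eu-north
```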

However, this progress from hyperscalers emphasises a divide across the industry. Since larger businesses secure large renewable energy agreements and invest in specialised cooling systems at a pace smaller businesses cannot match, sustainability becomes a competitive differentiator, rather than a common baseline to aim for.

Legacy over longevity

Progress towards both a sustainable future and data centre expansion is also limited by existing infrastructure. Many regions operate with legacy grids that are not equipped to support current growth patterns, and new grid connections face delays because of regulatory processes or ageing network capacity. Such constraints have already led to development delays in countries like Ireland, where the gap between digital expansion and physical systems lies exposed.

Financial pressures also shape progress. While green technologies offer lower long-term expenses, the upfront spend on retrofits, renewable power agreements or advanced cooling is high. The gap is most visible in regions where procurement decisions hinge on price/performance: operators in these markets work with tight margins and limited incentives, so upfront costs outweigh the long-term gains.

These global differences only increase friction. Regions with cheaper carbon-intensive electricity or limited regulatory policies see fewer reasons to commit to sustainable upgrades, which produces uneven progress instead of a unified movement within the sector.

What comes next?

A future driven by green data centres depends on coordinated progress. Utility managers and grid operators need plans aligned with policymakers, and manufacturers must work closely with cloud architects to ensure data centres can grow while minimising environmental impact.

Innovations must also coexist with these changes. Hardware and facility design must be combined with software that can steer workloads, with increased value placed on accurate responses to environmental conditions. Demand for AI in these scenarios will increase; at the same time, AI offers tools that will support more efficient energy use and flexible load management.

We need green data centres for a digital economy aiming to grow without intensifying climate pressures. It’s a multifaceted route to sustainable development, but with shared commitment and targeted design and innovation, operators are given a realistic way forward.

  •  

Data centre planning applications rose 63% in 2025

Data centre planning applications hit a record high across England and Wales in 2025, as developers and investors raced to secure sites amid rising demand for AI and cloud compute.

That’s according to new analysis from City AM, which noted that more than 60 planning applications for new data centres were submitted in England and Wales during 2025. That represents a 63% increase compared with 2024.

It shouldn’t come as too much of a surprise that there has been a surge in data centre planning applications, especially as the industry splashes the cash while big tech fights over who will be number one in the AI race. Firms such as Google, Microsoft and OpenAI are all committing huge capex budgets to expanding their data centre portfolios, and the UK is seen as a key target for new facilities.

What could be a surprise, however, is that the applications cited by City AM covered only brand-new data centre developments, excluding extensions to existing sites, revisions to past applications, and wider mixed-use schemes that include a data centre component. That means the true volume of data centre-related proposals moving through the planning system is likely to be higher.

Dame Dawn Childs, Chief Executive of Pure Data Centres, told City AM, “With this AI bubble that everyone’s talking about…because of the increased valuations for powered land, everyone’s trying to get a piece of the pie, and that creates a bunch of fizziness.

“We’re seeing lots of people who are sending out on a daily basis: ‘we’ve got this significant plot of land with all of these megawatts of power in the middle of nowhere, it’ll be an AI gigafactory, buy it for a gazillion pounds’ – they’re absolutely trying to get increased valuations for scrappy industrial land.”

Childs said the strongest demand is being driven by AI-related applications from major hyperscalers, while adding that even without AI, the UK would likely have seen a notable rise in activity as cloud adoption accelerates across the wider economy.

Geographically, the South East continues to dominate. Around half of the applications were located in London and the South East – areas already seen as a European hotspot for data centre capacity because of connectivity, customer proximity, and established infrastructure.

That said, the analysis points to a broader spread of proposals beyond the traditional hubs. Seven applications were submitted in Wales during the year, along with another seven in the East Midlands, four in the North West and four in Yorkshire, suggesting developers are increasingly looking further afield as land and power constraints bite in the South East.

It’s not just the number of applications that is changing — it’s the type of sites being targeted. Developers appear to be getting more creative, with proposals to repurpose a wide range of existing brownfield assets into data centres. The examples cited in the analysis include an abandoned Mercure hotel site in Watford, the old Truman brewery earmarked for conversion in Hackney, a shuttered coal mine in Nottinghamshire, and a former landfill site in Chesterfield.

Given the power demands of many modern data centres, old power stations are also proving popular sites for hyperscalers. In fact, it was recently revealed that Amazon was prepping a brand-new data centre on the site of the former Didcot A power station in Oxfordshire. Now it seems that project is just one of many currently battling their way through the UK’s planning system.

  •