
Received today – 4 April 2026 – Data Centre Review

Legrand acquires TES as it looks for growth in data centre market

2 April 2026 at 11:24

TES has been acquired by Legrand, in a deal that gives the Northern Irish engineering firm the backing of one of the biggest names in electrical and digital building infrastructure.

The acquisition follows a period of rapid growth for TES, which has built its presence in the European data centre market as well as the UK and Irish utility sectors. The company said it had grown revenue to £72 million and expanded its workforce to around 300 employees.

It’s estimated that around 50% of TES’ revenue currently comes from the data centre sector, and the firm hopes to expand further to capitalise on the rapid growth the industry is seeing in response to the rise of AI.

Headquartered in Cookstown, County Tyrone, TES has recently expanded its manufacturing footprint with the opening of a 300,000 sq ft campus in County Derry. That additional capacity should be able to pump out more low-voltage power distribution equipment, ensuring that the company can keep up with demand.

Earlier this year, Legrand pointed to the data centre market as a key driver of growth. While the firm has long been established in both the residential and commercial power market, it has been eager to compete with players such as Schneider Electric and ABB in the data centre sector. 

So far, its strategy has been paying off. Legrand reported that data centres accounted for 26% of its 2025 revenues, noting that the sector could account for 40% of its revenue in the future. This acquisition of TES should help it move towards that goal.

As part of the deal, TES said it would continue to operate from its existing facilities in Cookstown and Derry following the acquisition, maintaining its focus on local job creation and its specialist divisions serving both the data centre and water utility markets.

Brian Taylor, CEO of TES, noted, “Joining Legrand is a landmark moment for TES. Over the past number of years, we have scaled our operations at an incredible pace, and this acquisition is a testament to the hard work and expertise of our entire team. Legrand’s global reach and market-leading position in the electrical sector provide the perfect platform for TES to further expand our international presence. We are excited to bring our bespoke engineering solutions to a wider audience while remaining deeply committed to our roots in Northern Ireland.”

Noel McCracken, Managing Director of TES, added, “Our mission has always been to provide innovative, high-quality engineering for critical infrastructure. With the support of Legrand, we can accelerate our investment in state-of-the-art manufacturing and continue to lead the way in both the water and power critical infrastructure markets.”

AI won’t be won in the server room alone

2 April 2026 at 08:10

Fabrizio Landini, Global Data Centre Segment Leader at Hitachi Group, explains why the AI boom will stall unless data centre operators finally close the gap between IT and OT.

Over a third of organisations (37%) still report little or no collaboration between IT and OT teams (Cyolo/Ponemon Institute). This divide made sense in an earlier era. IT focused on storage, networking and compute, whilst OT managed physical infrastructure like power distribution, environmental controls and cooling systems. The two departments rarely needed to align beyond simple capacity planning.

Now, AI has altered that dynamic. Modern AI platforms are compute-heavy, generating huge thermal loads that require flexible power allocation and real-time optimisation of cooling systems. When an LLM (large language model) is trained, every element of the data centre – from cooling output to network bandwidth to power draw – must respond in a coordinated way. That level of coordination is difficult to achieve if IT and OT systems remain siloed and unable to communicate.

Let’s take a look at why IT/OT convergence remains such a challenge for data centre operators, and why AI growth depends on more than technical integration between the two teams.

The data centre of the future

Data centres are no longer just rows of servers in climate-controlled rooms. They’re complex, dynamic ecosystems where digital workloads and physical infrastructure need to operate in close alignment. Yet historically, information technology (IT) and operational technology (OT) have been managed in isolation, with separate teams, tools and priorities.

IT/OT convergence addresses this challenge by creating a centralised data and control plane that spans both domains. It allows operators to view the facility as a single, integrated system, rather than as a set of disconnected components. The result can include faster response times, better resource utilisation, and a stronger foundation for AI-assisted operations.

The real challenge is cultural, not technical

The benefits of IT/OT convergence are well understood. Despite this, true convergence remains elusive for many operators. While there are technical hurdles to overcome, the more persistent challenge is often cultural. IT and OT teams can have different priorities and risk tolerances. IT teams may move quickly, be receptive to change, and prioritise flexibility. OT teams, on the other hand, typically prioritise stability and reliability, and may resist change that introduces operational risk. Both approaches have clear strengths, but finding common ground can be difficult.

Successful convergence requires building bridges between these cultures. This can include creating cross-functional teams, establishing shared metrics, and developing a common language that both IT and OT professionals can use. It also means investing in training so that IT professionals understand physical systems, and OT professionals understand digital networks.

On the technical side, IT and OT systems often speak different languages. IT networks run on standard protocols like Ethernet and TCP/IP. OT systems may use proprietary industrial protocols designed for specific equipment. Unifying the two requires middleware, protocol translation and careful integration work. Legacy OT systems can also be difficult to integrate with modern tooling. Building management systems (BMS), for example, can be a stumbling block for data centre operators, particularly where older deployments have limited connectivity or rely on proprietary protocols.

Practical steps to achieve IT/OT convergence

It’s understandable if data centre operators feel overwhelmed by these challenges, and uncertain about where to begin. A phased approach is typically more realistic than attempting full convergence at once.

1. Ensure real-time data flow

Before you can converge operations, you need unified visibility. This means connecting OT systems to a common data platform, standardising data formats, and enabling real-time data flow. Begin with non-critical systems to build confidence and support a test-and-learn approach, before moving onto mission-critical infrastructure.
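In practice, that first step often means translating each OT system’s proprietary payload into one shared schema before it reaches the common data platform. The sketch below illustrates the idea in Python; the field names, payload shape and unit conversion are purely illustrative assumptions, not any vendor’s actual format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Reading:
    """Hypothetical unified telemetry record shared by IT and OT tooling."""
    source: str      # e.g. "bms/chiller-3" or "pdu/rack-12"
    metric: str      # e.g. "supply_temp_c", "power_kw"
    value: float
    timestamp: str   # ISO 8601, UTC

def normalise_bms(raw: dict) -> Reading:
    """Map an (assumed) proprietary BMS payload onto the common schema,
    converting Fahrenheit readings to Celsius along the way."""
    return Reading(
        source=f"bms/{raw['dev']}",
        metric="supply_temp_c",
        value=(raw["tempF"] - 32) * 5 / 9,  # °F → °C
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```

Once every source emits the same record type, dashboards and automation can be built once rather than per system.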

2. Create centralised dashboards

Once data is flowing, develop visualisation tools that give both IT and OT teams a shared view of the data centre. This helps reduce information silos and makes interdependencies more visible.

3. Automate responses

With unified visibility in place, operators can begin automating responses that span IT and OT domains. For example, when a high-power AI workload starts, cooling output can be adjusted automatically, with relevant notifications sent to power management systems.
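A cross-domain response of that kind can be expressed as a simple rule. The Python sketch below is a minimal illustration only: the 10% cooling overhead, the baseline figure and the callback names are assumptions for the example, not real control-system parameters.

```python
def cooling_setpoint_kw(it_load_kw: float, baseline_kw: float = 200.0) -> float:
    """Return a target cooling output for a given IT load.

    Assumes cooling must reject roughly the full IT power plus a fixed
    10% overhead, and never drops below a baseline (illustrative values).
    """
    overhead = 0.1 * it_load_kw
    return max(baseline_kw, it_load_kw + overhead)

def on_workload_start(it_load_kw, set_cooling, notify_power):
    """Cross-domain response: adjust cooling (OT), then notify power
    management (IT/OT boundary) that a large draw is expected."""
    target = cooling_setpoint_kw(it_load_kw)
    set_cooling(target)
    notify_power(f"AI workload started: expect ~{it_load_kw:.0f} kW draw")
```

In a real deployment the two callbacks would be adapters into the BMS and power management systems; the value of convergence is that one rule can drive both.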

4. Enable predictive monitoring and maintenance

One of AI’s most useful capabilities is anticipating faults before they result in service-impacting failures, helping teams prioritise corrective actions earlier. The quality of these predictions depends on high-quality historical data, robust analytics and appropriate machine learning models – but where the foundations are in place, the operational benefits can be meaningful.
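As a toy stand-in for the machine learning models mentioned above, even a rolling z-score over recent telemetry can surface drift before it becomes a failure. The window size and threshold below are illustrative assumptions:

```python
from collections import deque
from statistics import mean, stdev

class DriftDetector:
    """Flag readings that drift far from their recent history.

    A simple rolling z-score sketch; real predictive maintenance would
    use richer historical data and proper models.
    """
    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def update(self, value: float) -> bool:
        """Record `value`; return True if it is anomalous vs the window."""
        anomalous = False
        if len(self.history) >= 10:  # wait for enough history
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(value)
        return anomalous
```

Fed a steady temperature stream, the detector stays quiet; a sudden jump well outside the recent band is flagged for earlier corrective action.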

The road ahead

As AI-driven workloads increase over the next decade, IT/OT convergence may shift from a competitive differentiator to a prerequisite for resilience and continuity. Operators best positioned to succeed are likely to be those that close the gaps between digital workloads and physical infrastructure. Without effective system integration, data remains fragmented – limiting visibility, reducing the accuracy of analytics, and constraining the operational value that AI tools can deliver.

Importantly, this doesn’t need to be tackled all at once. Many organisations make progress incrementally, focusing first on visibility and data quality, then on automation and optimisation as confidence and capability grow.

The UK data centre power debate has a queue problem

1 April 2026 at 15:32

There are a lot of problems that the data centre industry in the UK has to contend with, whether it’s cooling, planning, power or, more recently, public image. But if we break down all those elements, there may be a bigger issue at play – impatience.

The industry is moving fast to capitalise on the hot new commodity of the moment – AI. Everywhere you look there’s a new AI feature being added to popular apps, or new AI companies launching promising to make people’s everyday lives easier. It’s easy to see why there’s this new wave of AI-in-everything, because that’s where the money is. 

Earlier this year, Gartner estimated that AI spending would top $2.5 trillion in 2026, and with that amount of money floating around – it’s bound to attract a whole swathe of people looking to get their payday. It’s also why you’re seeing more data centre developments than ever before, with new facilities being proposed on what feels like a daily basis. After all, how else are we going to enable AI if we don’t invest heavily in the infrastructure running it? 

That rush to deliver the promise of AI before the money runs out is one of the reasons the industry is in the spotlight. After all, if you’re proposing to build 140 data centres, which is the number cited by Ofgem as currently in the connections queue, people will start to have questions. And those questions are almost certainly going to focus on power, because we all know AI is power hungry, and we also all know we’re currently living through an energy crisis.

The queue is being mistaken for real demand

The problem is the debate about the industry’s power usage is being heavily distorted. That’s according to John Booth, DCA Advisory Board Member and Managing Director of Carbon3IT Ltd, who insists that we stop treating speculative projects in the grid connections queue as if they were firm future demand – and maybe, just maybe, take a deep breath and actually plan what we need. 

The distortion comes from the way headline figures are being repeated without enough scrutiny over what they actually represent. This is one of the issues I had with Carbon Brief’s headline, which read as a little on the sensationalist side, but it’s also something Booth has noted with the media’s representation of the industry’s power problem. After all, a project in the connections queue is not the same thing as a live facility, a committed build, or even a scheme with a guaranteed occupier. Yet too often, those distinctions are flattened out in public debate, creating the impression that every project is real, imminent, and destined to become a major new source of demand on the grid.

Booth’s argument is that this is where the conversation starts to go wrong. “The UK data centre operators have a very good handle on data centre construction projects and demands from their hyperscale and global clients, and they are progressing at pace to deliver their requirements,” he says. In other words, the established players aren’t rushing, they’re scaling appropriately to meet demand, but with money to be made in the market, new entrants are coming in who may not have as firm a grip on realistic demand. The problem is that the wider pipeline is now being viewed as though it all carries the same weight and certainty.

That, according to Booth, is simply not true. “The majority of the ‘new’ projects announced over the past 2 years are property plays, i.e. identify a suitable location, obtain planning and power, and then flip to either an end client or an existing operator or hyperscaler.” 

Now, there’s nothing particularly shocking about that as a commercial strategy. Property speculation exists in every hot market. In fact, I grew up well aware of the property boom happening in Spain in the early 2000s, as investors flocked to build new luxury apartments and homes, hoping to make significant returns. Like those investors found out in the 2008 financial crisis, however, while AI is hot right now, there’s no such thing as a guaranteed return on investment – and that’s why there’s a serious problem when speculative activity gets folded into a wider story about what the country needs to power and build.

Risky speculation shouldn’t shape the narrative

Booth is blunt about the risks. “This is a very risky strategy, as power connections are stretching out to 2037 and new planning rules may require substantial re-design for future AI designs. We also have to believe the AI companies that there will be an actual requirement for this amount of computing power in the future, this is by no means a given.” 

Now, there are many people betting against AI. You just have to take to social media, or even the media in general, to see people openly talking about the ‘AI bubble’ and if, or even when, it’s going to pop. While Booth is by no means anti-AI, he does raise an important point.

The current AI boom has created a rush for position, and in a rush for position plenty of people will try to secure land, power and optionality long before they have secured certainty. And doesn’t that speak to why the debate currently feels so overheated? 

There is a tendency to look at a huge queue number, merge it mentally with the excitement around AI, and conclude that the UK is on the brink of an unavoidable power crunch driven entirely by data centres. But that is a lazy reading of a much more complicated picture. Some demand is real. Some demand is strategic. Some demand is speculative. Some projects will progress. Some will stall. Some will be sold. Some will be redesigned. Some will never make it past the stage of being a good idea on paper backed by the hope that someone richer arrives later.

It’s time to take a deep breath

That’s why for Booth, the answer is not to dismiss the issue, but to slow down and get more serious about what is actually needed – especially when it comes to planning something as complex as the energy grid. “The key point is to remove the speculative projects from the connections queue, take a breath, evaluate exactly what is needed from AI data centres, and build accordingly with a spatial strategy in mind.” 

That last point is especially important. If the UK is serious about AI Growth Zones, serious about supporting strategic digital infrastructure, and serious about avoiding the mistakes of fragmented development, then this cannot just be a race to connect everything, everywhere, all at once. It has to be a question of what should be built, where it should go, and what kind of power system and planning framework is needed to support it.

That in turn brings us back to patience. Not inertia, not delay for delay’s sake, but patience in the sense of discipline. The industry has money chasing AI, developers chasing sites, policymakers chasing growth, and the media chasing dramatic numbers. Under those conditions, it becomes very easy for everyone to talk themselves into a future that looks more settled than it really is. And once that happens, policy starts being shaped not by what is likely, but by what is loudest.

Booth argues that this is already happening. “There appears to be a lot of confusion within DSIT, Ofgem, The AI Energy Council and in the media with regards to current and future data centre energy capacity requirements,” he says. He is equally clear on the consequences of that confusion: “The media speculation surrounding data centre energy use and using flawed information does no one any good and we should wait for a concise plan to be developed by all the stakeholders, which is exactly what is happening right now.”

That is probably the most useful intervention here. The point is not that data centres do not need power, or that AI is not going to reshape infrastructure demand. It is that a speculative queue should not be mistaken for a national blueprint. If the UK wants to have a serious conversation about digital infrastructure, energy security and economic growth, it needs to start by separating what is real from what is aspirational, what is strategic from what is opportunistic, and what is genuinely urgent from what is simply being pushed forward by market impatience.

Because impatience is what sits underneath all of this. The impatience to capture the AI boom. The impatience to secure land and grid access before someone else does. The impatience to turn every large number into a headline. The impatience to build a narrative before a proper plan exists.

And that may be the biggest problem of all.

Data centre heat should be treated as strategic infrastructure

1 April 2026 at 11:36

Data centre waste heat is already abundant and predictable. That’s why Simon Kerr, Head of Heat Networks at EnergiRaven, believes the UK needs joined-up regulation, heat zoning, and early planning engagement to capture it at scale.

As artificial intelligence, hyperscale computing and cloud services fuel an unprecedented expansion in the number of data centres, there is an accompanying increase in the amount of waste heat produced by digital infrastructure. Harnessing this heat could help the UK strengthen energy security and support decarbonisation, provided the right frameworks and infrastructure are put in place.

Each facility produces a continuous, predictable flow of heat which, with vision and planning, could contribute to urban energy systems, reduce reliance on gas for space heating, and support grid stability at a time of rising demand.

Today, much of this heat is treated as a by-product, expelled into the environment and lost. But with the right infrastructure, a larger share of it could be used to supply homes and public buildings, and to support local heat networks.

With careful policy and national planning, each unit of low-carbon electricity could deliver value twice – once in a data centre for computation, and again as useful heat in nearby buildings.

Learning from Scandinavia and charting our own path

Looking to our Northern European neighbours, Denmark and Sweden are demonstrating that heat reuse can work. Data centre heat flows into city-wide heat networks, reducing heating costs, gas consumption and exposure to volatile fossil fuel imports.

These outcomes are helped by alignment between policy, finance, governance and planning, creating an environment where connection is expected, investment is bankable and energy systems are coordinated.

However, it would be naive to assume the UK can simply copy Scandinavia. The UK is different: local authority powers are fragmented, heat zoning is inconsistent, and there is no obligation to consider heat recovery. The UK must develop its own blueprint, one that reflects our geography, regulatory landscape and existing infrastructure, and decide which approach will deliver the most practical results for UK citizens.

With ambition, joined-up planning and predictable funding, the challenge of cooling data centres could also become part of a broader energy opportunity.

How can we make this happen?

To get to a networked UK – where waste heat from data centre clusters in Slough, West London, Manchester and Edinburgh is fed into local heat networks to support homes and businesses – planning reform should treat data centres as strategic national energy assets. Every new facility should assess heat recovery potential, and early engagement with heat network developers should become routine.

Regulation should bring waste heat into mainstream energy policy, requiring large producers to report on and evaluate options to act on their waste heat potential. This should be supported by clear guidance from Ofgem, DESNZ and local authorities. Predictable frameworks for connection, supported by heat zoning, would reduce uncertainty for operators and help communities plan around available supply.

Meanwhile, establishing long-term capital frameworks and heat-purchase agreements would provide the commercial certainty required to accelerate adoption. This would encourage operators to treat heat as a managed output – valuable where there is a viable offtake route and a clear investment case.

To tie this all together, our mindset as a nation must shift towards seeing heat itself as a utility. This is already underway, with Ofgem set to start regulating heat networks from January 2026.

The practical barriers in our way

Integrating a new energy source at national scale is a daunting task, but it is something we have done many times before.

There are a number of measures we can take to realise this vision. We can task a central body with providing guidance to local authorities to help them build expertise; mandate that operators engage at the earliest stages of planning to enable cost-effective integration; and ensure “lessons learned” are collected and shared widely among all stakeholders.

We don’t need to look far to find examples of communities making heat recovery and usage work for them. Shetland Heat Energy and Power (SHEAP) is one example: by recovering heat from a local waste-to-energy plant, residents have benefited from reduced exposure to energy price shocks in recent years. The UK can overcome these challenges, but only with clarity and ambition. Early alignment of policy, planning and investment can turn heat recovery from a theoretical possibility into a deliverable, repeatable model.

A practical opportunity for operators

For data centre operators, heat reuse can create an additional revenue line in the right locations and, importantly, support decarbonisation objectives. Recovered heat can reduce cooling loads, improve ESG reporting, and strengthen investor confidence where delivery is measurable and contractual.

Early collaboration with regional heat networks can also improve project economics by aligning technical design, connection requirements and commercial terms from the outset. The operators best placed to benefit will be those that plan for heat export early, particularly in areas with dense heat demand and credible network development.

Operators who engage now may be better prepared as regulation evolves. Heat supply could become a stronger factor in planning decisions over time; planning early reduces risk and helps avoid costly retrofits.

Why the UK must act now

The stakes are high. Reusing data centre heat can reduce household heating costs, enable urban heat zoning strategies, and cut national gas demand – while supporting a rapidly expanding digital economy. As AI and cloud computing drive energy demand, aligning digital infrastructure with energy planning is a pragmatic opportunity that can be captured where the technical and commercial conditions are right.

The UK has the chance to turn a by-product into a useful local resource. By combining long-term vision with practical action, we can support a future where digital growth and decarbonisation can progress in parallel. The heat is already there – the question is whether we have the foresight to use it.

Equinix’s latest data centre in Dublin promises no additional grid strain

31 March 2026 at 13:43

Equinix has begun construction on a new data centre in Dublin, with the company planning to invest $78 million in the facility.

The new site, known as DB7x, will be built in Blanchardstown, close to two of Equinix’s existing Dublin data centres. It’s expected to offer retail IBX availability from early 2028.

Equinix already has a significant presence in Dublin, with six facilities currently operating in and around the city. What’s notable about all those data centres is that Equinix claims they’re all covered by 100% renewable energy, although its latest facility goes a step further. That’s because despite the furore around the impact new data centre developments have on the grid, DB7x is expected to not place any additional strain on the local energy grid. 

It’s important to note that Equinix’s DB7x data centre will still be connected to the grid; it’s simply being constructed on an existing site, using power that had already been allocated to that site. That’s not quite as significant as another data centre in Dublin, which recently claimed to have Europe’s first microgrid, but Equinix has promised that the facility will be set up to be ‘100% flexible’, so that it can support the grid.

Peter Lantry, Managing Director, Equinix, Ireland, noted, “This is an exciting development for Equinix’s operations in Ireland, as we celebrate 10 years of being in Ireland, investing in its infrastructure and economy. This announcement strongly supports the Government’s recently published Digital and AI Strategy, which outlines a path for keeping Ireland at the forefront of global digital innovation. It also reaffirms our commitment to Ireland and its importance to businesses worldwide.

“This is positive news for the Irish economy and we would like to thank the IDA Ireland for their continued support and collaboration to enable our sustainable growth in Ireland. By expanding colocation capacity in Dublin, we will enable domestic and international enterprises to scale, innovate, and connect across Equinix’s global digital infrastructure platform with ease.”

Anne-Marie Tierney Le Roux, Head of Technology, IDA Ireland, added, “Today’s announcement is a significant boost to Ireland’s digital infrastructure. Equinix’s continued investment demonstrates strong confidence in Ireland as a location for high-performance, sustainable data centre operations. This new facility will enhance the country’s connectivity, support the growth of AI and cloud services and further strengthen Ireland’s position as a leading hub for digital innovation and international investment.”

Confidence isn’t a women problem – it’s an industry challenge

31 March 2026 at 08:38

Lizzy McDowell, Director of Marketing at Kao Data, explores why confidence remains one of the biggest unspoken barriers for women in digital infrastructure, and how sharing real stories can help break the cycle.

I want to talk about something that doesn’t appear on any org chart, doesn’t feature in any job description, and is very rarely discussed openly in our industry: confidence. Or, more precisely, the quiet absence of it.

Last month, I wrote about why I launched Critical Careers and what we’re trying to achieve at Kao Data. The response honestly blew me away. But what really caught my attention was what people said to me privately, away from the comment sections and conference floors. Women in this industry kept telling me the same thing, in different words but with the same feeling behind it, ‘I have often wondered if I really belong here’. That may seem difficult to believe, but it’s true.

That stopped me in my tracks. Because these weren’t graduates or early-career professionals finding their feet. These were established, capable leaders: women running teams, managing multi-million-pound projects, advising boards. And they were still carrying that nagging voice in the back of their heads, telling them they weren’t quite enough.

The industry that rewards boldness

Let’s be honest about our sector for a moment. Digital infrastructure is built on bold decisions. Every data centre campus that breaks ground represents someone who backed themselves and their team, took a calculated risk, and went for it. Every power strategy, every land deal, every hyperscale contract won – these things don’t happen unless they’re underpinned by confidence.

Innovation in this industry demands it. You need confidence to challenge a design assumption. You need it to propose a new cooling approach that hasn’t been tried before. You need it to walk into a boardroom and tell the people holding the budget that there’s a better way to do things.

But here’s where it gets complicated. For women, developing that confidence often comes with an additional set of hurdles that our male colleagues simply don’t face.

The numbers behind the feeling

The data centre workforce is still overwhelmingly male. According to the Uptime Institute, women make up just 8-10% of data centre teams – a figure that has barely shifted in half a decade. In the broader technology sector, research from Hays found that 68% of women working in tech experience imposter syndrome, compared to 61% of men. And perhaps most tellingly, over a third of women in tech say those feelings of self-doubt have become more frequent as their careers have progressed, not less.

Digest that for a moment. The more senior women become, the more they doubt themselves. That’s not a personal failing. That’s a fundamental systemic signal.

When you work in an environment where you rarely see someone who looks like you in a leadership position – and a recent KPMG study found that 75% of female executives across industries have experienced imposter syndrome – it becomes easy to internalise the idea that you are somehow the exception rather than the evidence of what’s possible. A Vodafone study found that six in ten women said they would be more likely to apply for a role if they could see other women already in leadership positions. Visibility isn’t a nice-to-have. It’s a critical catalyst.

Confidence isn’t the problem. Context is.

I should be very clear: I don’t think women lack confidence because something is wrong with them. I think the environments many of us work in haven’t been designed – or haven’t evolved enough – for us to feel confident. There’s a difference.

When you’re the only woman in a room full of male colleagues, it takes a different kind of inner strength to speak up. When you’re pitching an idea and you can sense that the default assumption is scepticism rather than interest, it wears you down over time. When you look at the leadership page of most data centre operators and see row after row of similar faces, it’s hard not to draw internal conclusions about where the ceiling is.

I’ve felt this myself. There have been moments in my career where I’ve questioned whether I should be in the room, whether my perspective was valid, whether I’d earned my place at the table. And every single time, I was wrong to doubt myself. But the doubt didn’t come from nowhere. It came from the signals around me.

What stories can do

This is exactly why Critical Careers exists. Not to lecture. Not to produce another report that gets filed and forgotten. But to show women at all levels, through the real experiences of their peers, that they are not alone in feeling this way – and that those feelings don’t have to define their trajectory.

Libby Milne, a project manager at Buro Four working in the data centre sector, put it brilliantly when she spoke about what first drew her to the industry. It wasn’t a careers leaflet or a university open day. It was her dad taking her to a construction site. Something clicked because she could see herself in it. That’s the power of exposure. That’s what happens when someone opens a door and says, ‘This could be yours’.

Every woman featured in the Critical Careers book, on our podcast, and at our events has a version of that story. A moment where they found their footing, not because the doubt disappeared, but because they pushed through it with the support of someone who believed in them. Sometimes it was a mentor. Sometimes it was a peer. Sometimes it was simply reading about someone else’s journey and realising: if she can do it, maybe I can too.

From inspiration to action

Storytelling is so important, but stories alone aren’t going to cut it. The data centre industry is growing at a pace that outstrips almost every other infrastructure sector. Investment is unprecedented, demand is relentless, and the talent pipeline is struggling to keep up. We need more people. Full stop. And we won’t get them if half the population doesn’t see digital infrastructure as a place where they can thrive.

That means creating spaces where women can build confidence together, not in isolation. It means normalising the conversation around doubt and imposter syndrome so that it becomes something people address openly, rather than something they carry alone. It means ensuring that when a young woman looks at our industry for the first time, she doesn’t just see data halls and backup generators. She sees people she can genuinely relate to, doing extraordinary work.

Of course, it also means that the men in this industry have a role to play. Not as saviours, but as allies. Sponsoring women for opportunities. Amplifying their voices in meetings. Recognising when someone is being talked over and making space for them to finish their point. These things sound small and trivial. They’re not.

An invitation

If you’re reading this and you’ve ever felt like you don’t quite belong in this industry, I want you to know something: you do. Your doubt is not evidence that you’re out of your depth. It’s evidence that you care. And caring about doing good work is exactly what this industry needs more of.

Critical Careers will keep telling these stories. We’ll keep creating spaces for honest conversations. And we’ll keep pushing for an industry that doesn’t just talk about diversity, but builds the culture and foundations to support it.

Because confidence isn’t something you’re born with. It’s something you build. And sometimes, all it takes to start building is seeing someone else who’s already standing where you want to be.

See you next month.

Nscale latest to face public backlash over proposed data centre

30 March 2026 at 15:18

Nscale has become the latest company to face intense scrutiny over a proposed AI data centre in Essex, in a further sign of the sector’s growing image problem.

The company plans to build a major AI data centre in Loughton, Essex, and had previously hoped to complete the project by the end of 2026. That timeline was always ambitious, given the skills shortage affecting the UK construction industry and the fact that the site is still being used as a scaffolding yard. But what could actually hold the project back is the industry’s old nemesis – planning. 

Although the data centre received outline approval in 2024, renewed public concern about the impact of such developments has reignited debate around the site. Planning officers at Loughton Town Council have now called for a fresh planning application to be submitted because of changes proposed by Nscale.

Loughton Town Council won’t make the final decision on the project, as that responsibility lies with Epping Forest District Council. However, the objections raised by Loughton’s officers focus on several key concerns.

Unsurprisingly, one of the main issues is power. Reflecting on the wider public anxiety about AI data centres, the town council’s planning officers said they were concerned about the strain the project could place on the local electricity grid. They have called on Nscale to provide further evidence on the development’s likely impact.

In a further objection to the scheme, the officers have also pushed back on design changes proposed by Nscale – a building they say is 50% higher than originally proposed, with 50% more internal capacity. The objection again hinges on power, however, noting that the “proposal would require more cooling and increased energy to facilitate this application.”

As part of their objection, the officers have asked Nscale to submit a completely new planning application. That could have a major impact on the company’s timeline. Nscale has already pushed back the expected completion date to early 2027, citing technology upgrades rather than the latest planning issues.

An industry under fire

AI is being heralded by the UK Government as an opportunity to boost economic growth, and Nscale has been one of the big success stories. The firm recently completed a funding round which valued it at $14.6 billion, and has also attracted high-profile board members including Nick Clegg and Sheryl Sandberg.

Despite Nscale’s growth and strong government backing for the sector, the wider data centre industry is facing significant headwinds. Opposition is growing across the country over the pressure data centres place on the power grid, at a time when electricity prices are already high and there are fears they could rise further as a result of Trump’s war in Iran.

While the industry is trying to push back against the idea that data centres are inherently harmful, it is also facing increasing resistance from local authorities. Edinburgh Council recently announced plans for a moratorium on new data centre developments in the city, and Nscale is now encountering fresh opposition from Loughton Town Council.

Nscale, however, doesn’t appear overly concerned by the latest backlash. In a statement to The Telegraph, a company spokesperson noted, “Site investigation and permit work is under way on the Loughton site, and we expect construction work to begin in the second quarter of 2026.

“While the schedule was recently updated to accommodate the installation of the latest Vera Rubin 200 technology, we expect the site to be operational in the second quarter of 2027.”

The build is the easy part – Day Two is the stress test for AI infrastructure

30 March 2026 at 08:40

Matt Salter, Data Centre Director at Onnec, outlines how sudden demand surges, thermal events, and component lead times force operators to prove resilience in real time, not on paper.

Constructing AI-ready infrastructure is only the first milestone in the journey to providing AI compute. The real test begins once the facility is operational, servers are installed and workloads go live.

Day One focuses on planning and construction: blueprints, power distribution, cooling systems, connectivity and redundancy. These are all measurable elements that make a facility ‘AI-capable’ on paper.

Day Two, however, introduces complexity and unpredictability. Thermal spikes, workload surges, equipment failures and supply chain delays quickly expose the gap between design assumptions and operational reality.

Day Two is when resilience moves from theory to practice. AI workloads are inherently volatile, and stress conditions often emerge only once systems are live. How well a data centre adapts, responds and maintains performance under pressure separates designs that succeed from those that falter.

AI workloads and infrastructure stress

AI workloads behave very differently from traditional enterprise or cloud computing. Dense GPU clusters generate concentrated heat and draw power in sudden surges, sometimes changing markedly within seconds. Industry commentary has increasingly highlighted how these dynamics can strain transformers and upstream electrical infrastructure, creating fluctuations that older data centres were never designed to handle.

Networking interconnects can also become saturated by unpredictable east-west traffic, while even small inefficiencies in cabling, containment or floor layout are amplified under load – creating hotspots and airflow bottlenecks that compromise performance.

Operating under these conditions is a far greater challenge than building the facility. Thermal events can arise abruptly, and misaligned cooling, power distribution or interconnect capacity can quickly lead to performance degradation or downtime.

Older facilities, designed for lower-density racks and slower-growing workloads, are particularly vulnerable. Even where redundancy exists, the intensity and volatility of AI workloads demand rapid, continuous response, leaving traditional monitoring and manual intervention insufficient.

Legacy infrastructure compounds these risks: many centres can’t support modern interconnect technologies such as InfiniBand, and industry incident analyses frequently link outages to preventable issues in cabling and cooling practices.

In AI-scale environments, engineering decisions on airflow, rack density and cabling quality directly influence whether a facility can maintain performance under sustained, high-intensity workloads.

Supply chains, maintenance and skilled operations

Infrastructure stress is only part of the picture. Supply chain constraints further complicate operations. Critical components such as GPUs, optical modules and cabling often have long lead times, and replacement can take weeks rather than days.

Even minor interruptions can escalate into significant operational issues if spare capacity, inventory management and contingency planning are not in place. According to the Data Centre Cost Index, 80% of operators report delays in manufacturing or delivery of essential equipment.

Shortages extend beyond GPUs; advanced fibre, switches and cabling are all in high demand, with multiple operators competing for the same scarce stock. Without timely access to the right components, even carefully designed facilities can struggle to maintain performance and execute planned upgrades.

Design choices and long-term resilience

Skills and process only go so far if the design limits operational options. Data centres must be engineered to be resilient and modular from the outset, because early design decisions often determine how effectively teams can deploy, monitor and maintain systems under real-world pressures.

Decisions made during design and construction have lasting operational consequences. Structured cabling, modular mechanical systems, spare power and cooling capacity, and flexible interconnect architectures all reduce the need for costly retrofits. Forward-looking design supports change without unnecessary disruption.

Starting early is vital, particularly when factoring in external constraints on designs that impact resilience. Labour shortages, regulatory changes, ESG compliance requirements and regional supply chain bottlenecks can all influence performance if not considered early.

In AI data centres, infrastructure and operations are inseparable: monitoring depth, operational runbooks and proactive planning are as important as the hardware itself. Facilities that embed these principles are better equipped to manage volatility, reduce downtime and maintain reliable performance even under extreme conditions.

Day Two defines long-term success

Building an AI-ready data centre is an achievement; operating one reliably under high-density, dynamic workloads is the true test. Day Two challenges assumptions about power, cooling, networking and staffing, revealing whether a facility can sustain AI workloads continuously.

Success is not measured by capacity on paper but by the ability to maintain uptime, handle surges and adapt in real time.

Where on-site coverage is limited, some operators use third-party on-site support (‘smart hands’) under tightly defined runbooks to execute urgent maintenance and fault isolation. The goal is speed and consistency: shorten time-to-diagnosis, reduce time-to-repair and keep changes controlled when conditions are already stressed.

As AI workloads expand across industries, Day Two operations will determine which facilities can scale, perform and remain resilient. The data centres of the future will integrate infrastructure, monitoring and operational strategy seamlessly, with proactive response embedded into everyday practice.

In the era of accelerated compute, the real test begins once the build is complete; it is on Day Two that long-term reliability is earned.

The Government got data centre emissions wrong – but that’s only part of the story

27 March 2026 at 11:50

The UK wants to be an AI powerhouse, with Chancellor Rachel Reeves even pledging that it will have the fastest AI adoption of any G7 country. But if ambition is to meet reality – in other words, if we actually want more AI adoption – we need more compute. And if we want more compute, we need more data centres. And if we want more data centres, we need more land, more cooling and, above all, a lot more power.

That is why Carbon Brief’s analysis this week hit such a nerve. It focused on a point that should already have raised eyebrows in Whitehall – the Government’s own numbers on data centre emissions looked far too low. And on that narrow but important point, Carbon Brief is right. 

The original DSIT Compute Evidence Annex said UK AI compute demand could reach 11.2GW by 2035, while associated emissions would still come in at just 0.025 to 0.142 MtCO2. That was laughably inaccurate – so much so that the Government has already quietly updated the forecast. Not with a more accurate figure, but with a note stating that it was intended to inform policy development rather than represent a final cross-government view, and that it is now working on a more accurate assessment.

The Government’s numbers never really added up

You don’t need to be anti-data centre to see the problem. The basic logic is obvious enough. Huge amounts of AI compute require huge amounts of electricity, and electricity is only as clean as the system supplying it. The Government is right to talk about AI Growth Zones and faster infrastructure delivery, but it clearly got ahead of itself when it implied the emissions side of the equation would be negligible. If Britain is serious about scaling compute, then the power and carbon implications have to be taken seriously too.

It is not hard to see why the modelling ended up looking so convenient. The Government has staked a lot on AI as a driver of productivity and growth. It has also made bold claims about its commitment to driving down carbon emissions and delivering a greener grid. It is possible that officials simply assumed grid decarbonisation would move fast enough to absorb the coming wave of data centre demand. Maybe it still will. But that is an assumption that has to be proved, not wished into existence – and if we are going to make forecasts, they should probably be grounded in probability.

Ofgem’s own figures show just how big a challenge it would be to get anywhere close to the Government’s previous forecasts. The regulator says there are around 140 data centres in the connections queue representing roughly 50GW of demand, including 71 projects amounting to around 20GW that have already reached a final investment decision. Of course, not all of those data centres will come to fruition, but with new projects proposed on a near-daily basis, the important thing to remember is that a large new fleet of data centres is coming regardless.

Carbon Brief is right – but not about everything

I don’t want to get bogged down in whether Carbon Brief’s analysis is right or wrong, because it’s useful regardless, even if not flawless. It is right to point out that the Government’s numbers looked too rosy, but it is probably being a little hyperbolic when it suggests that Britain is heading for a dirty, gas-fuelled data centre boom. In fact, just like the Government’s forecast, its ‘hundreds of times higher’ headline is based on a scenario, not a forecast. More specifically, it relies on cases where a meaningful share of future data centre electricity comes from gas. That is a perfectly fair stress test. It is not the same thing as saying this is the most likely outcome. Again, probability matters.

There is also an important like-for-like problem in the comparison. Carbon Brief compares DSIT’s 11.2GW AI-only figure with Ofgem’s broader estimate that 71 mature data centre projects amount to around 20GW in the connections pipeline. Carbon Brief itself acknowledges that these figures are ‘not directly comparable’, because the Ofgem number is not specifically AI-only. That caveat matters. If the question is whether the Government’s modelling looked too low, the answer is yes. If the question is whether the sector is therefore heading for a fossil-fuelled free-for-all, the evidence is far less clear.

There is another point worth making here too. Backup generators are a real environmental issue, but they are not the same thing as routine power supply. Parliament’s latest POSTnote notes that generators are typically emergency systems, used infrequently and tested monthly, with regulatory limits on non-emergency run hours for larger installations. So it is important not to collapse every discussion of backup fuel into a claim that data centres are routinely running on-site fossil generation as their main source of electricity. That is not how most of the sector operates.

This is really a grid story

That is why the real story here is not whether data centres are ‘good’ or ‘bad’. It is whether Britain can build enough clean power, network capacity and system flexibility in the right places, quickly enough, to support the next wave of digital infrastructure. Before you argue that I’m simply defending data centres because I write for a data centre publication, I can assure you I’m not. The industry can only get better if it is held to account, but that doesn’t mean we should keep fuelling the narrative that data centres are automatically bad.

The public debate still swings too easily between two lazy positions. One says data centres are much cleaner than other forms of heavy industry and therefore should be given a pass. The other says they are climate villains in waiting. Neither is accurate. 

As Arcadis’ David Field argued recently, it is time to separate fact from fiction on data centre energy demand, because the sector only really makes sense when viewed in its wider system context. Data centres are energy-intensive, yes, but the real question is how grid capacity, phasing, cooling design and long-term energy planning evolve around them, not whether a single headline number can do all the work.

In fact, data centres will only be as low-carbon as the energy system, planning regime and technology choices around them allow them to be. That is why AI Growth Zones matter. The Government’s own response to the AI Opportunities Action Plan says these zones are supposed to offer enhanced access to power and support for planning approvals, while taking energy requirements into account with the National Energy System Operator. The AI Energy Council was also set up specifically to look at how AI and clean-energy goals can be delivered together.

The industry is already moving

And this is where the industry side of the story deserves more airtime than it usually gets. It’s not like the UK data centre sector is sitting still waiting to be told it has an emissions problem. It’s already well aware and is moving quickly on procurement, energy efficiency and backup power. 

Equinix, one of the biggest players in the space globally, says its UK data centres have 100% renewable energy coverage on a market basis, that all new UK sites since 2021 no longer use natural gas for heating, and that its Manchester 5 facility has used HVO for backup generators since 2022. It is also trialling lower-GWP refrigerants and says it is maintaining a focus on further PPAs and on-site low-carbon energy options.

Equinix is not alone. Kao Data says it procures 100% renewable energy on a market basis, reports an average estate PUE of 1.53, says it pioneered HVO for backup power in the UK, and has switched its Renewable Energy Guarantees of Origin (REGO)-backed supply to Dogger Bank from April 2025. VIRTUS says it has used purely renewable energy since 2012. Those are not magic fixes, but they do show the sector is actively working the levers it can control.

This is also where it is worth being honest – after all, I promised to hold the industry to account. Procuring 100% renewable energy on a market basis is not the same thing as saying a facility is physically running on zero-carbon electricity every hour of the day. Critics are right about that. Equinix’s own UK reporting shows market-based Scope 2 emissions at zero, while location-based Scope 2 emissions remain material. Kao Data reports the same distinction, saying its market-based Scope 2 emissions remain at zero because of REGOs while location-based emissions remain significant and may rise as the business grows. So yes, the grid still matters enormously. That is why it’s not an argument against data centres. It is an argument for getting the wider energy system right.

A plan is starting to emerge

There are also signs that the industry is trying to go beyond certificates and easy fixes. Kao Data’s deal with Downing Renewable Developments to build a 40MW solar farm for its Harlow campus is a good example. The project is designed to supply the campus with solar-generated electricity via a private-wire arrangement under a long-term PPA. It is not a whole-sector solution on its own, but it does point to a pattern of larger operators taking things into their own hands by being more direct with their renewable energy procurement and reducing the pressure on the grid, rather than simply complaining about it. 

We’re also seeing similar agreements from other operators, with SMRs seen by some in the industry as a way for the sector to decouple from its reliance on the grid while also reducing carbon emissions. While we’re still some way from seeing the first SMR deployed in the UK, Holtec International, EDF UK and Tritax Management have agreed to develop an SMR in Cottam, Nottinghamshire, for the express purpose of powering a data centre.

There is also a broader framework taking shape. techUK says the current Climate Change Agreement for the sector requires a 14.5% energy-improvement target by 2030 against a 2022 baseline. The Climate Neutral Data Centre Pact, which many operators have signed up to, says electricity demand should be matched by 75% renewable or hourly carbon-free energy by the end of 2025 and 100% by the end of 2030. Again, these are not slogans. They are formal benchmarks against which the sector can be judged.

And if policymakers want examples from abroad, Ireland is already moving in a more explicit direction. Its regulator has decided that new data centres connecting to the electricity network must provide generation and/or storage capacity, and must meet at least 80% of their annual demand with electricity from additional renewable projects in the Republic of Ireland. Given the scale of Ireland’s power problems, it could serve as a testbed for how to meet the growing need for AI data centres without breaking the grid.

The real question 

So yes, the Government probably did underestimate data centre emissions. Carbon Brief was right to say so. But it is too simplistic to jump from that to the conclusion that the UK is heading for a reckless, fossil-fuelled data-centre boom. The more accurate picture is messier, but also more constructive.

The official numbers need fixing. The grid needs to move faster. Siting decisions need to get smarter. And the industry needs to keep proving that its decarbonisation plans are real, measurable and not just market-based spin. But the outline of a plan is already there: cleaner procurement, lower-PUE design, cleaner backup fuels, more direct renewable deals, tougher sector targets and a bigger push to build where the power system can actually support growth. 

The real failure now would not be a lack of ambition from the industry. It would be a failure from the Government to match that ambition with an energy strategy capable of making it credible. 

Engineering cooling systems for the next generation of high-density data centres

27 March 2026 at 08:04

Pete Elliott, Senior Technical Staff Consultant at ChemTreat, argues that as rack densities soar, fluid chemistry, materials compatibility, and commissioning discipline will determine whether high-density cooling delivers reliability – or inefficiencies from day one.

The rapid rise of AI-driven workloads has pushed data centre cooling design into unfamiliar territory. Rack densities that once defined the upper limit of facility planning are now baseline assumptions, and traditional air-based cooling and heat-rejection approaches are struggling to keep pace. Direct-to-chip liquid cooling, immersion systems, and hybrid architectures are becoming core elements of modern mechanical design.

Yet the shift to liquid cooling introduces new complexities that go well beyond thermal performance. Mechanical design choices, fluid chemistry, and materials compatibility now play a decisive role in long-term reliability, commissioning success, and sustainability outcomes. For engineers tasked with designing or retrofitting high-density environments, understanding these interactions – and their impact on ever-tightening project timelines – has become essential.

Designing for heat transfer is only the starting point

The appeal of liquid cooling is straightforward: liquids transfer heat far more efficiently than air, allowing direct-to-chip systems to remove heat at the source and stabilise temperatures under extreme loads. In general terms, water provides far higher thermal conductivity than air, which can reduce reliance on large air-handling systems and enable higher rack densities within a smaller footprint.

However, thermal performance alone does not guarantee operational success. As power density increases, systems become less tolerant of variation. Minor changes in flow distribution, water chemistry, or material condition can have disproportionate effects on performance. Many of the issues that arise in liquid-cooled environments are mechanical or chemical in nature rather than purely thermal, which means early engineering decisions can significantly influence system reliability.

This is where disciplined design assumptions, consistent water quality from the start, and pre-operational system preparation matter most.

Mechanical design decisions that influence long-term performance

High-efficiency thermal management solutions rely on narrow channels, precision manifolds, and tight tolerances. These features improve heat transfer but increase sensitivity to fouling, corrosion, and flow imbalance, making materials selection for system components a key step in the design process.

Mixed-metal systems introduce galvanic corrosion potential that can be managed through considered design and water chemistry control. Copper, aluminium, stainless steel, and various alloys can coexist successfully, but their interactions should be anticipated from the outset. Electrically insulated junctions (dielectrics) can help mitigate galvanic effects. Treatment strategies may also be required to manage galvanically induced pitting, particularly where copper interfaces with less noble materials of construction such as aluminium or low-carbon steel.

Flow velocity presents another design trade-off. Excessive velocity can accelerate erosion and material wear, while insufficient velocity increases the risk of deposition and biofilm formation. Engineers should balance these forces while accounting for variable loads, particularly in hybrid environments where air- and liquid-cooled racks operate simultaneously.

Fluid chemistry as an engineering control variable

In liquid-cooled systems, water quality is not a background consideration; it directly influences system performance and longevity.

Parameters such as pH, alkalinity, conductivity, hardness, and dissolved oxygen affect corrosion rates and material stability. Suspended solids and microbial growth can obstruct cold plates and reduce effective heat transfer long before alarms are triggered. Unlike traditional cooling towers, where some variability can be tolerated, direct-to-chip systems typically demand tighter control and more consistent monitoring.

Effective mechanical design may involve incorporating filtration (often at tighter thresholds than conventional cooling systems), sampling points, and online monitoring into the system layout from the earliest design phases. Treating fluid chemistry as an operational afterthought increases the likelihood of post-commissioning failures that are difficult and costly to correct.
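To make the monitoring point concrete, here is a minimal sketch (illustrative only, not ChemTreat guidance) of how online water-chemistry readings for a direct-to-chip loop might be checked against tight control bands. All parameter names and limits below are hypothetical examples; real bands depend on the loop’s materials and treatment programme.

```python
from dataclasses import dataclass

@dataclass
class ControlBand:
    low: float
    high: float

# Hypothetical control bands for a closed direct-to-chip loop
BANDS = {
    "ph": ControlBand(8.0, 9.5),
    "conductivity_us_cm": ControlBand(0.0, 2000.0),
    "dissolved_oxygen_ppm": ControlBand(0.0, 0.5),
}

def out_of_band(sample: dict) -> list:
    """Return the parameters in `sample` that fall outside their band."""
    alerts = []
    for name, value in sample.items():
        band = BANDS.get(name)
        if band is not None and not (band.low <= value <= band.high):
            alerts.append(name)
    return alerts

# Example: elevated dissolved oxygen should trigger an alert
reading = {"ph": 8.6, "conductivity_us_cm": 450.0, "dissolved_oxygen_ppm": 1.2}
print(out_of_band(reading))  # ['dissolved_oxygen_ppm']
```

The point of the sketch is simply that the checks are cheap once sampling points and online instrumentation are designed in; the expensive part is retrofitting them after commissioning.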

Commissioning and the hidden risk window

Many liquid cooling issues surface not during steady-state operation but at start-up and early commissioning. Construction debris, residual oils, and incomplete system cleaning can compromise performance from day one.

Effective pre-operational planning typically benefits from early technical consultation. Reviewing system materials, operating conditions, and anticipated thermal loads upfront supports the selection of an appropriate water-management approach and feed strategy, helping stabilise water chemistry during the commissioning period, when the system is particularly vulnerable.

Pre-operational cleaning and passivation also play an important role. Without them, even well-designed systems may experience accelerated corrosion or fouling that shortens component life. To preclude adverse conditions early on, it is important to create a detailed plan for a proper system flush, followed by cleaning and passivation. This means flush volumes and the time duration per flush need to be agreed upfront. Additionally, disposal of flushing fluid and cleaning solution requires discussion prior to commencing any of these pre-commissioning operations.

Commissioning also provides an opportunity to validate monitoring strategies, confirm flow balance, and establish baseline performance metrics.

Skipping these steps introduces uncertainty and limits an operator’s ability to respond proactively as workloads evolve.

Hybrid cooling architectures and operational flexibility

Few data centres transition entirely to liquid cooling in a single phase. Hybrid architectures that combine air and liquid cooling can offer a practical path forward.

Designing these environments involves careful integration between air systems, liquid loops, heat exchangers, and control platforms. Engineers should consider how thermal loads will shift as AI workloads expand, to ensure the infrastructure can adapt without major redesign.

Hybrid deployments also allow operators to test and refine water-management strategies before scaling further. Early implementation provides real-world data that can inform future decisions around chemistry control, filtration, and maintenance practices.

Sustainability through design discipline

Sustainability in high-power data centres is often discussed in terms of energy efficiency, but water use is becoming an equally important part of the conversation.

When it comes to water usage efficiency, closed-loop cooling circuits can offer clear advantages when properly designed and maintained. By minimising evaporation and discharge, these systems reduce overall water demand while improving thermal stability. Integrating reuse strategies such as air-handler condensate recovery or reclaimed water, where feasible, can further reduce environmental impact. Rainwater recovery has also been shown to improve water usage effectiveness in some deployments, even if it is used principally for on-site utility purposes.
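The standard yardstick here is Water Usage Effectiveness (WUE), defined by The Green Grid as annual site water use in litres divided by IT equipment energy in kWh. A quick sketch with illustrative numbers (not figures from this article):

```python
def wue(annual_water_litres: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness in litres per kWh of IT energy."""
    return annual_water_litres / it_energy_kwh

# e.g. a site using 20 million litres/year against 40 GWh of IT load
print(round(wue(20_000_000, 40_000_000), 2))  # 0.5 L/kWh
```

Closed-loop and reuse strategies like those described above lower the numerator directly, which is why they show up so clearly in WUE reporting.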

The most effective sustainability outcomes are achieved when water-management goals are embedded into mechanical design rather than added later, as retrofits are typically more costly in the long run. Systems designed for stability, cleanliness, and long service life tend to consume fewer resources over time.

Collaboration as a risk reduction strategy

One of the most common challenges in high-density cooling projects is misalignment between mechanical design assumptions and operational realities. Early collaboration between mechanical engineers, materials specialists, and water-treatment experts can help reduce this risk.

Incorporating these disciplines into the design phase helps identify and address potential failure modes before construction begins. This approach can lead to more stable performance, fewer retrofits, and lower total cost of ownership.

Engineering for reliability in an AI-driven future

The next generation of data centres will be defined not only by how much compute power they deliver, but by how reliably and efficiently they operate under extreme thermal conditions. Liquid cooling can enable that future – but only if supported by thoughtful mechanical design and disciplined water management.

Engineers who treat fluid chemistry, materials selection, and commissioning as core design considerations will be better positioned to deliver facilities that scale with confidence and withstand the demands of AI-driven workloads.


Campaigners mount last-ditch bid to block data centre at former RBS HQ

By: DCR
2 February 2026 at 13:34

Campaigners have launched a last-ditch effort to block a proposed data centre in Edinburgh, warning that the development’s back-up power plans could mean ‘100,000 idling cars-worth of diesel’ being burned if generators were ever run at scale.

Action to Protect Rural Scotland (APRS) published research ahead of a meeting this week where Edinburgh City Council is due to consider planning permission in principle for a green data centre at Redheughs Avenue in the Gyle area – which is the site of RBS’ former HQ. 

Edinburgh Council planning officers have recommended the project for approval, but campaigners have warned that the application does not take into account the full environmental impact of the site. 

Kat Jones, a Director at Action to Protect Rural Scotland, noted, “There is so much information missing from the application documents about the environmental impacts of this development. 

“The data centre will draw 210MW from the grid, which would power a quarter of a million homes, so a few low energy lighting solutions are neither here nor there. And that’s before we even start talking about the diesel generators.”

“If there were medals for greenwashing then these data centre developers are Olympic-level. The claims from the developer that this is a green data centre are obviously bunkum.”

The framing around the diesel generators can certainly sound scary – but as many in the data centre industry will know, they are primarily there as emergency back-up, rather than a primary power source. In fact, one of the key reasons that Scotland is being considered as an attractive location to site data centres is because of the high availability of power coming from renewable generation in the area. Scotland already produces much more power than it consumes, and data centres are hoping to tap into that surplus, and even potentially reduce the amount of curtailment that is required when the country is producing too much electricity. 

That doesn’t mean the backup generators won’t ever be required – grid issues can and do happen – but that is true of many sites, whether a hospital, factory or warehouse, and isn’t exclusive to data centres. Scotland may have an abundance of power, but its grid still needs reinforcement if more of that power is to be used – something SSE and National Grid are keen to deliver.

But Dr Jones is keen to stress that data centres have already shown more regular use of diesel backup generators than other sectors, noting, “When you look at what is happening in the US, diesel generators are being used more as the grid becomes under pressure from the demand from data centres due to their astronomical energy demands.

“This site is just upwind of the city centre, close to residential homes, and 220 metres from a nursery. This is not something that should be happening with so little oversight – and without being required to do an environmental impact assessment.”

While Dr Jones is not wrong, it’s not exactly the same situation. US markets cited in these debates often face acute, localised capacity constraints and commercial incentives that can normalise generator operation beyond rare emergencies; Scotland’s system challenges are different. That doesn’t remove the central planning question – what happens if generators run more frequently than residents expect, and what conditions or assessments are in place to manage that risk? That will be up to Edinburgh City Council.

What the proposals actually entail

The proposed data centre would sit on the former campus of Royal Bank of Scotland, a large office complex originally constructed in the early 1990s. Shelborn Asset Management bought the site in 2021, and the original buildings were demolished in 2022 after NatWest Group staff moved to Gogarburn.

The developer later pivoted away from office-led plans and consulted on a campus featuring two data centre buildings of different sizes and a new on-site substation – which could help with power capacity and ensure the back-up generators remain turned off.

When planning officers assessed the site, they noted that the plans had ‘regard to the global climate and nature crises through re-use of brownfield land in a sustainable location’, while also adding that ‘it is not considered that the proposal will have a significant effect on the environment’. 

For councillors, the decision is whether to accept officers’ recommendation and approve the scheme in principle, or whether the questions raised over diesel back-up, local impacts and the absence of a formal environmental impact assessment justify holding the project back. We’ll find out when the council’s planning committee meets on Wednesday, February 4.

DCR Predicts: Can data centres become ‘good neighbours’ in 2026?

2 February 2026 at 08:18

Gareth Williams, Director, UK, India, Middle East and Africa Data Centres and Technology Leader at Arup, argues that 2026 should be the turning point for designing facilities that stabilise grids, steward water, and deliver visible community benefits.

2026 marks a pivotal opportunity to transform how data centres are seen in the public eye. Much has been done to change perceptions from anonymous ‘black boxes’ into strategic assets. Now we must ensure they are seen as positive partners for local energy, water and communities.

That means designing for reciprocity: centres that not only consume, but also stabilise grids, steward scarce water, create jobs, share heat, and leave biodiversity richer than before. This is what I see in briefs for clients, planners and operators alike: putting community benefit at the heart of developments, not as an afterthought.

Energy: from load to flexible, clean, locally useful power

AI-centric workloads are driving volatile, high-density demand, making efficiency gains harder. This is forcing smarter energy strategies, from chip-level liquid cooling and rack-level heat recovery to intelligent workload management.

We will increasingly see data centres act as energy hubs, with co-located renewables, multi-hour batteries, combined heat and power systems, and grid-service participation (frequency response, demand shifting) from day one. Pilot policies already treat facilities as grid allies, including heat-reuse quotas and flexible-access contracts. Operating models will increasingly shift compute to regions with surplus wind and sun, routing non-time-critical training to wherever clean energy is abundant.

Baseload energy supply options will mature unevenly. Some operators are testing power purchase agreements linked to small modular reactors to accelerate capacity. Others will combine hydrogen fuel cells for peak resilience with smart microgrids and local renewables. Regardless, the key is to offer two-way benefits: better uptime for operators and measurable support for national grid stability.

Water: design for scarcity, stewardship and circularity

Cooling demand will keep rising with denser compute. This can shift demand in some cases from air to liquid solutions, but the next step is water stewardship by design: closed-loop systems, immersion cooling where appropriate, and zero-freshwater ambitions in stressed catchments.

The Climate Neutral Data Centre Pact points to a water usage effectiveness (WUE) trajectory from ~1.8 L/kWh to 0.4 L/kWh at water-stressed sites by 2040. This is ambitious, but achievable if we switch to non-potable sources and track upstream and downstream impacts.
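As a rough illustration of what that trajectory implies, the sketch below converts the two WUE figures into annual water volumes. The 10 MW IT load and continuous operation are assumptions chosen purely for illustration, not figures from the Pact.

```python
# Rough sanity check of the Pact's WUE trajectory, using hypothetical figures:
# a 10 MW IT load running continuously all year.
# WUE (water usage effectiveness) = site water use (litres) / IT energy (kWh).

HOURS_PER_YEAR = 8760

def annual_water_use_m3(it_load_kw: float, wue_l_per_kwh: float) -> float:
    """Annual site water use in cubic metres for a given IT load and WUE."""
    it_energy_kwh = it_load_kw * HOURS_PER_YEAR
    return it_energy_kwh * wue_l_per_kwh / 1000  # litres -> cubic metres

it_load_kw = 10_000  # hypothetical 10 MW of IT load
today = annual_water_use_m3(it_load_kw, 1.8)   # current WUE cited by the Pact
target = annual_water_use_m3(it_load_kw, 0.4)  # 2040 target at stressed sites

print(f"at 1.8 L/kWh: {today:,.0f} m3/yr")   # 157,680 m3/yr
print(f"at 0.4 L/kWh: {target:,.0f} m3/yr")  # 35,040 m3/yr
print(f"saving:       {today - target:,.0f} m3/yr")
```

For this assumed facility, hitting the 2040 target would cut annual water draw by roughly 120,000 cubic metres, which gives a sense of why the switch to non-potable sources matters at scale.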

Practical levers for 2026 include site-level greywater reuse, recycled/industrial ‘brackish’ water sources, rainwater harvesting with sponge landscapes, and seawater cooling at coastal hubs — where environmental permissions and biodiversity management are designed from the outset. Singapore’s Green Data Centre Roadmap shows how regulation can drive cooling tower efficiency upgrades, blowdown recycling and cycles-of-concentration improvements that cut freshwater withdrawals at scale.

Community engagement: early, transparent, beneficial

Engagement still starts too late on many projects. Flip the sequence: begin with benefits, then shape the scheme around agreed outcomes. Practical packages include renewable partnerships that share surplus power; district heat reuse; biodiversity corridors and accessible green space; fibre upgrades that lift local connectivity; and STEM education funding and jobs for technicians and landscapers.

Community-first design de-risks approvals and earns trust. These aren’t gestures; they increase value over the life of the campus. This ‘good neighbour’ lens is the fastest way to retire the ‘black box’ image and demonstrate tangible contributions to people’s lives.

Technology: intelligent management, edge resilience, advanced cooling

AI already plays a crucial role in enhancing operations, and it’s only getting smarter. One example is Digital Realty’s collaboration with Ecolab, which identifies real-time operational inefficiencies in cooling systems and recommends improvements to conserve water.

AI-powered management will become the operating system of next-generation facilities, actively orchestrating workloads, power and cooling to maximise efficiency. Intelligent monitoring will drive automation for predictive maintenance, spotting deteriorating components early and scheduling interventions without disrupting SLAs.

At campus scale, hyperscale modular architecture (standardised power and cooling blocks with repeatable controls) will enable capacity expansion and help manage AI surges. And at rack level, advanced liquid cooling systems (direct-to-chip and rear-door heat exchangers) will integrate with smart controls to maximise performance while minimising power and water use.

Materials: low-carbon, modular, designed for circular recovery

Measuring whole-life carbon is vital to managing the sustainability of buildings and critical infrastructure, including data centres. The materials brief should be explicit: certified low-carbon or recycled steel, geopolymer concrete where feasible, and engineered timber for appropriate architectural elements and shading. Envelope design, daylighting and thoughtful material selection can cut operational and embodied impacts while improving working environments.

2026 will see increasing design for disassembly and recovery: standardised rack aisles, traceable components, and procurement that favours reclaimed metals and remanufactured cooling equipment. We should expect to link digital asset plans with physical asset lifecycle strategies, ensuring that refresh cycles trigger material recovery instead of waste.

Acceleration: scale fast, standardise what matters, customise what counts

Large, out-of-town campuses with repeatable, prefabricated/containerised solutions are the only way to match AI demand responsibly. To make this happen, owners and operators will need to standardise the backbone (power blocks, cooling modules, monitoring stacks), then customise for local energy and water contexts.

Reduced bespoke engineering means faster approvals, lower risk, and clearer community commitments (heat and water reuse, biodiversity) baked into template designs. Energy policies that treat campuses as anchor tenants and reward flexibility services will further cut delivery timelines while raising public value.

Conclusion: a systems brief

This is the year to design data centres as reciprocal systems: energy hubs that stabilise grids and disclose 24/7 clean sourcing; water stewards that minimise freshwater draw and close loops; and neighbours that fund skills, share heat, and leave landscapes better than before.

With multidisciplinary teams and a place-first brief, owners and operators can move from compliance to contribution — engineering facilities that are engines of local resilience and global compute. If we build them this way, the sector will be remembered not for what it consumed, but for what it enabled.

This article is part of our DCR Predicts 2026 series. The series has now officially concluded; you can catch all the articles at the link below.

DCR Predicts 2026

LFB Group rebrands data centre division as Apx

30 January 2026 at 12:03

LFB Group’s dedicated data centre division has rebranded to Apx, in a move the company says reflects the “complexity, pace and performance expectations” now defining the European data centre market.

The rebrand comes as operators and developers grapple with rising compute intensity, with AI deployments pushing rack densities higher and putting greater scrutiny on cooling performance and delivery timelines. In that environment, Apx says closer collaboration earlier in the design and build cycle – including co-engineering and pre-commissioning – is becoming increasingly important.

The name should also feel familiar: LFB Group has used Apx before, as the name of an entire cooling infrastructure product series. Now, however, the name is being extended to cover the whole division.

Apx will feature the familiar dedicated team from LFB Group, which was previously part of Lennox, so the experience that the company has gathered over the last 20 years will continue to be there – just under a new name. 

Why has LFB Group rebranded its data centre division to Apx? 

Given its established position in the market – why the rebrand? Well, the company says Apx is all about market positioning. Not only has it recently debuted three new products, but it is also keen to capitalise on the explosive growth occurring in the data centre market – especially in Europe.

The company is building its positioning around pre-commissioning and early validation work, with capabilities it describes as spanning precision manufacturing, automated testing and climatic validation.

Matt Evans, CEO at Apx Data Centre Solutions, argued that the ability to validate performance earlier has become a differentiator as large projects are announced at pace. He noted, “The industry’s dams have well and truly burst, with billion-dollar projects and developments being announced almost every week. Keeping on top of this demand, though, has never been more important.

“Today, collaboration is everything. Operators are searching for partners who can offer them both flexibility and agility, enabling them to build for the future while reacting quickly to what’s happening right now. That’s where co-engineering becomes critical; by working with designers, contractors and operators from day one, we can shape decisions together, anticipate challenges and engineer solutions before they become problems.”

Evans added that front-loading engineering work is intended to reduce uncertainty once equipment reaches site. He continued, “While no one can predict what’s around the corner, one thing is clear: performance has to be proven earlier. It’s been one of our grounding principles since the start; the idea that pre-commissioning must be core to every product’s DNA. By front-loading engineering, validating performance up-front and removing uncertainty before components reach sites, we give operators the head space, and time, to meet the demand.

“The direction of travel is clear: scale, capacity and density. And I couldn’t be more excited about where we’ve taken this business. The new Apx name marks our next chapter, and it’s one we’re genuinely proud to be part of.”

While it has a new name, Apx will continue to sit within the wider LFB Group, which also includes HVAC specialist Redge and refrigeration business Friga-Bohn. The group says this structure provides industrial-scale manufacturing support and engineering expertise across refrigeration and mechanical disciplines.

Alongside the branding change, Apx is also expanding headcount. The company said it will recruit across project management, operations, controls, commissioning and sales support roles in France, Germany and the Netherlands. By 2027, its dedicated data centre team is expected to reach around 50 employees.

DCR Predicts: Is data sovereignty about to trigger a cloud rethink?

30 January 2026 at 08:09

With regulators and boards paying closer attention to where sensitive data sits, Fred Lherault, Field CTO EMEA/Emerging Markets at Pure Storage, outlines why hybrid strategies and selective cloud repatriation are likely to accelerate as AI scales.

After two years of accelerated AI experimentation, rising expectations, and rapid vendor expansion, I believe 2026 will mark an important inflection point for organisations building modern data infrastructure. Many enterprises are now moving past the initial hype cycle and focusing on what is required to operationalise AI reliably and at scale.

That shift is already visible across customers evaluating how AI will integrate into production workflows. If we extrapolate from these trends, several themes are likely to influence how organisations design their data pipelines, storage architectures, and cloud strategies in the year ahead. The following reflects my perspective on how these dynamics may unfold.

From hype to production: data readiness and inference become the priority

While some organisations are still convincing themselves of how essential AI is, most are now realistic about what they do – and, crucially, do not – deploy. The switch in focus from training to inference means that, without a robust inference platform and the ability to get data ready for AI pipelines, organisations are set to fail.

As AI inference workloads become part of the production workflow, organisations will have to ensure their infrastructure supports not just fast access, but also high availability, security, and non-disruptive operations. Not doing this will be costly, both from a results perspective, and an operational one.

However, most organisations are still struggling with the data readiness challenge. Getting data AI-ready requires going through many phases, such as data ingestion, curation, transformation, vectorisation, indexing, and serving. Each of these phases can typically take days or weeks, and delay the point when the AI project’s results can be evaluated by the business.

Organisations that care about using AI with their own data will focus on streamlining and automating the whole data pipeline for AI – not just for faster initial results evaluation, but also for continuous ingestion of newly created data, and iteration.

This remains one of the most significant barriers to AI adoption. Enterprise data is often dispersed across legacy systems, cloud environments, and archives, which makes it difficult to access and prepare at the speed AI workflows require. In 2026, we can expect this challenge to become more pronounced as organisations look to extract value from all of their data, regardless of location. Manual preparation will not scale to meet these requirements. Automated pipelines, richer metadata, and integrated data platforms will become essential foundations for organisations aiming to use AI with continuous, repeatable outcomes.

AI and data sovereignty will reshape cloud strategy, and accelerate selective repatriation

The dual issues of AI and data sovereignty are driving concerns about where data is stored, and how organisations can maintain trust, and guarantee access in the event of any issues. In order to extract value from AI, it is critical for organisations to know where their most important data is, and that it is ready for use.

Concerns about data sovereignty are also driving more organisations to reconsider their cloud strategy. Rising geopolitical tensions and regulatory pressure will shape nations’ data centre strategies in 2026 in response. Governments, in particular, want to minimise the risk that access to data could be used as a threat or negotiating tactic. Organisations should be similarly wary, and prepare themselves.

We are already seeing early indicators of this shift. Boards and regulators are paying closer attention to where sensitive and strategically important data resides, driven, in part, by evolving regulatory frameworks such as GDPR, DORA, and guidance emerging from the EU AI Act. This scrutiny is prompting many organisations to reassess cloud strategies that once prioritised cost or convenience over sovereignty and resilience.

As a result, hybrid models are likely to expand, with more AI-critical datasets and workloads positioned closer to where they can be governed, audited, and controlled. This is not a retreat from the cloud, but a more deliberate, workload-specific leveraging of it.

KubeVirt will scale into mainstream production

The recent changes to VMware licensing that followed Broadcom’s acquisition have kickstarted a conversation around alternative approaches to virtualised workloads. KubeVirt, which allows management of virtual machines through Kubernetes, provides one such alternative—a platform that encompasses both virtualisation and containerisation needs—and I expect it will take off in 2026.

The KubeVirt offering has matured to the point where it is suitable for enterprise needs. For many, moving to another virtualisation provider is a huge upheaval, and, while it may eventually save money, it always comes with a set of limitations and constraints, especially when it comes to everything that surrounds the virtualisation platform (data protection, security, networking, and so on).

KubeVirt enables organisations to leverage the growing Kubernetes ecosystem and realise value more quickly, in a platform that can manage, orchestrate, and monitor not just VMs but also containers, however the balance between the two evolves over time.
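For readers unfamiliar with what this looks like in practice, a VM under KubeVirt is declared as a Kubernetes custom resource alongside ordinary workloads. The fragment below is an illustrative sketch only; the VM name, labels and disk image are placeholders, not details from the article.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app-vm            # placeholder name
spec:
  running: true                  # start the VM when the object is created
  template:
    metadata:
      labels:
        kubevirt.io/domain: legacy-app-vm
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
            cpu: "1"
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # example image
```

Because the VM is just another Kubernetes object, the same tooling used for containers – kubectl, GitOps pipelines, operators – applies to it, which is the unified control plane the article describes.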

KubeVirt’s momentum reflects a broader shift in how organisations want to operate their infrastructure. As containerisation becomes standard and AI workloads scale, many teams are looking for a unified operational model that reduces complexity, and avoids long-term platform lock-in. Consolidating virtual machines and containers under a single control plane aligns with this direction.

If adoption increases as predicted, storage and data services will evolve in parallel, with greater demand for persistent, low-latency, Kubernetes-native storage that can support mixed-workload environments.

2026 will be about discipline, not disruption

If the past two years have been defined by rapid disruption, driven largely by AI, 2026 is likely to be a year where organisations prioritise the operational foundation required for long-term success. Enterprises will:

  • Move from AI experimentation to consistent, production-grade inference models
  • Modernise data pipelines to support continuous data readiness
  • Reassess cloud strategies with a sharper focus on sovereignty, governance, and resilience
  • Evaluate VMware alternatives, such as KubeVirt, which support a unified approach to virtual machines and containers

The organisations able to take these shifts in their stride will be best placed for success in 2026.

This article is part of our DCR Predicts 2026 series. The series will officially end on Monday, February 2 with a special bonus prediction.

DCR Predicts 2026

Saudi Arabia pivots NEOM ‘gigaproject’ to AI data centre hub

By: DCR
29 January 2026 at 15:47

Saudi Arabia is reportedly preparing to scale back NEOM, its marquee ‘gigaproject’ on the Red Sea, looking instead to develop an AI data centre hub.

According to unnamed sources cited by a report in the Financial Times, Saudi Arabia will scale back its hugely ambitious NEOM megaproject to create a new livable region in the desert in the northwest of the country, on the Red Sea coast. The project was announced in 2017 by Crown Prince Mohammad Bin Salman and was a cornerstone of his Vision 2030. It was to cover about 26,500 square km, roughly the size of Belgium (see map below).

The image accompanying this article, from 27 October 2024, shows Sindalah, a luxury island destination and the first physical showcase of NEOM.

NEOM was due for completion in 2030 and included plans for a city called The Line – a row of 500m-tall skyscrapers stretching for some 170km. However, NEOM suffered many delays and cost overruns, as well as criticism for potential environmental damage and for being unrealistic, among other things.

In addition, Saudi Arabia is hosting the Expo international trade fair in 2030 and the football World Cup in 2034, both of which involve large-scale investment. Work on NEOM was paused in 2025 while the government looked at its options in a year-long review scheduled to conclude this quarter.

According to a report in the Financial Times, focus for the region will be more on industry, such as becoming a hub for data centres. Its location means sea water can be used for cooling and the Crown Prince is keen to make his country a leader in AI infrastructure – a hub for data centres to power AI – to attract inward investment and high profile international partners.

An unnamed source cited by the FT said the location had other advantages too, such as digital infrastructure and its position at the crossroads of three continents (Africa, Asia and Europe), plus almost limitless renewable energy and available land.

It’s not the first time NEOM has been touted as a potential host for data centres, with DataVolt committing $5 billion to develop a new 1.5GW net zero AI campus at NEOM’s Oxagon. That was expected to come online in 2028, but it’s unknown whether it will be impacted by the planned rethink for the NEOM area.

This article originally appeared on Mobile Europe, with additional commentary from Data Centre Review.

Lanarkshire becomes Scotland’s first AI Growth Zone, UK’s fifth

29 January 2026 at 14:40

Lanarkshire has been named the UK’s latest AI Growth Zone, with the UK Government backing a major expansion around DataVita’s data centre site in the area. 

This is the first AI Growth Zone located in Scotland, which has long been positioned as an ideal area to host one – given the abundance of renewable power that is available in the region. The Scottish Government has also been keen to promote the area in hopes of developing it into a leading zero-carbon, cost-competitive green data centre hub. 

The Lanarkshire AI Growth Zone, which is the fifth AIGZ to be announced, is set to be based around DataVita’s campus, with the Scottish data centre firm delivering the site in partnership with AI cloud provider CoreWeave. That’s slightly different from other sites, which have often been positioned around multiple data centre operators – such as the North East Growth Zone, centred around expansions to existing campuses from Cobalt Park Data Centres and QTS’ Cambois site.

Despite being centred around the one expanded campus, the UK Government still has big hopes for the site. In fact, it’s hoped that the site will bring more than 3,000 jobs to the area over the coming years, including 50 apprenticeships. Around 800 roles are expected to be higher-paid AI and digital infrastructure jobs, spanning everything from research and software to permanent staff running and maintaining data centres, with the remainder tied to construction and site development.

Alongside job creation, ministers are pointing to £8.2 billion of private investment, plus a community fund worth up to £543 million over the next 15 years, which the Government says will be raised as data centre capacity comes online.

What’s being built as part of the Lanarkshire AI Growth Zone

The Lanarkshire AI Growth Zone may be centred around DataVita and CoreWeave’s partnership, but that doesn’t mean it’s just a single facility. To the contrary, the site is expected to feature 100MW of AI-ready data centre capacity, over 1GW of renewable energy infrastructure connected via private wire, and ‘Innovation Parks’ intended to attract adjacent industries that want proximity to large-scale compute.

That extra power will be key to the deployment of this latest AI Growth Zone – access to power is seen as a key requirement for gaining the designation – but it should also go some way towards reducing public opposition. Another data centre, located to the south of Glasgow in Hulford, has seen intense local opposition due to its enormous power demands, with residents outraged that the site wouldn’t even need to assess its environmental impact on the local area.

DataVita and CoreWeave will be keen to avoid the same backlash – which is why the companies are leaning heavily on a whole host of sustainability claims for the Lanarkshire AI Growth Zone. As well as using renewable energy to help power the site, the two firms also plan to make use of waste heat.

The current plan is that excess heat from cooling systems could, in time, be redirected to support the nearby University Hospital Monklands, described as Scotland’s first fully digital and net zero hospital – though that element is presented as something to be explored once the site is fully up and running, rather than a guaranteed near-term deliverable.

That would be a huge win for advocates of heat networks, with a recent report suggesting that waste heat from UK data centres could heat 3.5m+ homes – it could also help the site win favour with local residents who are impacted by the plans. 

It’s not the only part of the plan that has been developed in a bid to win over residents. In fact, a community fund – worth up to £543 million over 15 years – will also be set up to support local programmes ranging from skills and training packages through to after-school coding clubs and support for local charities and foodbanks. 

DataVita’s parent company, HFD Group, is also expected to contribute £1 million per year to local charities and community groups, on top of the Growth Zone community funding mechanism.

Industry reaction

Commenting on plans for the first AI Growth Zone in Scotland, the UK’s Technology Secretary Liz Kendall noted, “Today’s announcement is about creating good jobs, backing innovation and making sure the benefits AI will bring can be felt across the community – that’s how the UK government is delivering real change for the people of Scotland.

“From thousands of new jobs and billions in investment through to support for local people and their families, AI Growth Zones are bringing generation-defining opportunities to all corners of the country.”

Danny Quinn, Managing Director of DataVita, added, “Scotland has everything AI needs – the talent, the green energy, and now the infrastructure. But this goes beyond the physical build. We’re creating innovation parks, new energy infrastructure, and attracting inward investment from some of the world’s leading technology companies. 

“This is a real opportunity for North Lanarkshire, and we want to make sure local people share in it. The £543 million community fund means the benefits stay here – good jobs, new skills, and investment that actually reaches the people who live and work in this area.”

Matthew Baynes, VP of Secure Power and Data Centres at Schneider Electric UK & Ireland, concluded, “In the twelve months since the introduction of the AI Opportunities Action Plan, the UK has seen much progress towards its AI ambitions.

“The new AI Growth Zone (AIGZ) announced today in Lanarkshire demonstrates just how far the country has come in its plans to build a sovereign AI nation, with Scotland becoming a critical new infrastructure hub and joining those in Wales, Oxfordshire, and the Northeast of England.

“Furthermore, the country has now secured more than £31B in investment from some of the world’s largest, leading tech companies, demonstrating that the UK has the people, resources and ambition to make AI a centrepiece of a new and revitalised Industrial Strategy.

“While this can be considered a success in many respects, there is still much work to do. Access to renewable power remains one of the biggest hurdles facing many parts of the country, and as the UK’s energy technology partner for data centres and AI Infrastructure, we believe there is a clear opportunity to catalyse both the AI and green transitions by turning data centres into the energy centres of the future – fast-tracking new developments with behind-the-meter power generation and microgrids.

“Furthermore, the AIGZ announced today could not be more timely. We believe Scotland, with its cool temperate climate and rich conditions to generate renewable energy, provides a key opportunity to create secure, scalable and sustainable infrastructure capable of galvanising the AI race. Now, the UK’s sustainability and AI ambitions must work together hand-in-glove, demonstrating that today’s technology can be a catalyst for a greener future, powered by AI.”

DCR Predicts: The new bottleneck for AI data centres isn’t technology – it’s permission

29 January 2026 at 08:23

As gigawatt-scale sites move from abstract infrastructure to highly visible ‘AI factories’, Tate Cantrell, Verne CTO, argues that grid capacity, water myths, and local sentiment will decide what actually gets built.

The industry in 2026 will need to get ready for hyper-dense, gigawatt-scale data centres, but preparation will involve more than infrastructure design alone. AI’s exploding computational demand is pushing designers to deliver facilities of greater density that consume a growing volume of power and challenge conventional cooling.

The growth of hyperscale campuses risks colliding with a public increasingly aware of power and water consumption. If that happens, a gap may open between what designers can achieve with the latest technology and what communities are willing to accept.

A growing public awareness of data centres

The sector has entered an era of scale that would have seemed implausible a few years ago. Internet giants are investing billions of dollars in facilities that redefine large-scale and are reshaping the market. Gigawatt-class sites are being built to train and deploy AI models for the next generation of online services.

But their impact extends beyond the data centre industry: the communities hosting these ‘AI factories’ are being transformed, too.

This is leading to engineered landscapes: industrial campuses spanning hundreds of acres, integrating data halls with power distribution systems and cooling infrastructure. As these sites become more visible, public awareness of the resources they consume is growing. The data centre has become a local landmark – and it’s under scrutiny.

Power versus perception

Power is one area receiving attention. Data centre growth is coinciding with the perception that hyperscale operators are competing for grid capacity or diverting renewable power that might otherwise support local decarbonisation. There is no shortage of coverage suggesting data centres are pushing up energy prices, too.

These perceptions have already had consequences. In the UK, a proposed 90 MW facility near London was challenged in 2025 by campaigners warning that residents and businesses would be forced to compete for electricity with what one campaign group leader called a “power-guzzling behemoth”. In Belgium, grid operator Elia may limit the power allocated to operators to protect other industrial users.

It would not be surprising to see this reaction continue in 2026, despite the steps taken by all data centre operators to maximise power efficiency and sustainability.

Cool misunderstandings 

Water has become another focal point. Training and inference models rely on concentrated clusters of GPUs with rack densities that exceed 100 kW. The amount of heat produced in such a dense space exceeds the capabilities of air-based cooling, driving the move to more efficient liquid systems.

Yet ‘liquid cooling’ is often interpreted by the public as ‘water cooling’, feeding a perception that data centres are draining natural water sources to cool servers.

In practice, this is rarely the case. While data centres of the past have relied heavily on evaporative cooling towers to deliver lower Power Usage Effectiveness, today we see a strong and consistent trend towards lower Water Usage Effectiveness through smarter cooling and sustainable design. Developments in technology are making water-free cooling possible, too, with half of England’s data centres using waterless cooling. Many operators use non-water coolants and closed-loop systems that conserve resources.
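For readers less familiar with the two metrics mentioned here, PUE and WUE are simple ratios originally defined by The Green Grid: PUE compares total facility energy to IT energy, while WUE measures litres of water consumed per kWh of IT energy. A minimal sketch with purely illustrative figures (not data from any real facility):

```python
def pue(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (ideal = 1.0)."""
    return total_facility_energy_kwh / it_energy_kwh

def wue(annual_water_litres: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water consumed per kWh of IT energy."""
    return annual_water_litres / it_energy_kwh

# Illustrative figures for a hypothetical facility
print(pue(1_300_000, 1_000_000))  # 1.3
print(wue(1_800_000, 1_000_000))  # 1.8 L/kWh
```

A dry-cooled or closed-loop site can report a WUE near zero even when its PUE is slightly higher, which is the trade-off the trend described above is navigating.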

Data centres as part of the community 

Addressing public concerns will require a change in how operators think about their place in communities. Once built, a data centre becomes part of the local fabric and the company behind it, a neighbour. Developers need to view that relationship as more than transactional. They must demonstrate that growth is supported by resilient grids capable of meeting new demand without destabilising supply or driving up cost.

Water and power are essential resources, so public concern is understandable. It’s therefore important that operators show that density and efficiency can be achieved without disproportionate environmental impact. The continued rollout of AI-ready data centres will depend as much on social alignment as on advances in chip performance.

That alignment will be tested in 2026 and beyond as another wave of high-density deployments arrives. Based on NVIDIA’s product roadmap, we already have a sense of what’s coming: each generation of hardware delivers more power and heat, requiring more advanced infrastructure.

NVIDIA’s Chief Executive Jensen Huang introduced the DSX data centre architecture at GTC 2025 in Washington DC, a framework designed to make it easier for developers with limited experience to deploy large-scale, AI-ready facilities. In effect, it offers a global blueprint for gigawatt-scale ‘AI factories’.

A positive outcome of this will be a stronger push towards supply chain standardisation. Companies such as Vertiv, Schneider Electric and Eaton are aligning around modular power and cooling systems that are easily integrated into these architectures. NVIDIA, AMD and Qualcomm, meanwhile, have every incentive to encourage that standardisation. The faster infrastructure can be deployed, the faster their chips can deliver the required compute capacity.

Standardisation, then, becomes a commercial and operational imperative, but it also reinforces the need for transparency and shared responsibility.

Efficiency and expansion 

Behind all of this lies the computational driver: the transformer model. These AI architectures process and generate language, code or other complex data at scale – the foundation of today’s generative AI. They are, however, enormously power-hungry, and even though it’s reasonable to expect a few DeepSeek-type breakthroughs in 2026 – discoveries that achieve similar performance with far less energy thanks to advances in algorithms, hardware and networking – we shouldn’t expect demand for power to drop.

The technical roadmap during 2026 is clear. We are heading towards greater density, wider uptake of liquid cooling and further standardisation. With data centres running as efficiently and sustainably as possible, developers and operators will need to establish trust with local stakeholders for the resources required to develop and power the AI factories that will drive a new era of industrial innovation.

This article is part of our DCR Predicts 2026 series. Check back every day this week for a new prediction, as we count down the final days of January.

DCR Predicts 2026

We’re going On the Record with a new column series

By: DCR
28 January 2026 at 15:20

Data Centre Review is launching a new monthly column series, dubbed On the Record, which will feature regular commentary from named contributors across the data centre industry.

The new series is designed to provide a spotlight to select voices to share perspectives on the issues shaping the sector, from resilience and energy regulation to skills and emerging technologies.

Data Centre Review has always been a town hall – a place where diverse opinions are allowed to shine, and that will continue. However, unlike one-off guest comment pieces, On the Record is structured as a recurring series, with contributors publishing on Data Centre Review every month. That gives our readers a consistent set of industry viewpoints to follow over time.

What to expect from On the Record

Each On the Record column will offer a direct, accountable viewpoint from a recognised organisation or specialist contributor. Topics will span the challenges and opportunities facing data centres today, including:

  • Design, build and operations best practice
  • Emerging trends and technology impacts
  • Energy, sustainability and regulation
  • Infrastructure and resilience
  • Skills, talent and leadership

We’re launching the series with two initial contributors: 

On the Record with the Data Centre Alliance – This column will bring an industry-wide perspective on standards, priorities and the big conversations influencing the sector.

On the Record with Critical Careers – This column will focus on careers and representation in the industry, with an emphasis on women entering data centres and the barriers that still exist.

The first On the Record with the Data Centre Alliance is now live, exploring water scarcity and whether the UK’s data centre industry can do more to reduce its water usage. 

Additional contributors are expected to be added over time, expanding the range of organisations and topics represented within the series. 

How to avoid drowning in data at the expense of freshwater supplies

28 January 2026 at 15:18

TechBuyer’s Astrid Wynne argues that as AI drives up cooling demand, water stewardship must become a core design principle – not an afterthought.

As artificial intelligence accelerates demand for data centre capacity, the conversation around sustainability is shifting. Energy efficiency has long dominated the agenda, but water, the silent resource underpinning cooling systems, has emerged as a critical concern.

Scoping the problem on site and throughout operations, and providing practical guidance to avoid extra strain on freshwater use, were key aims of The Data Centre Alliance’s Drowning in Data best practice paper, published in October 2025. Developed by leading industry experts, the paper explains how to avoid freshwater use, how to account for the water footprint of energy use, and how to maximise water efficiency in cooling systems.

Growing awareness of water scarcity

Water scarcity is no longer a distant threat. Today, four billion people experience severe water stress for at least one month each year, according to a 2025 World Economic Forum report. In the UK, the deficit between the infrastructure capacity to provide clean water and the demands placed on it by agriculture, housing and industrial needs is in the billions of litres a day. The growing number of data centres, and reports of their on-site water use, began to raise alarm bells in the mainstream press in early 2025.

With Keir Starmer’s announcement of proposed ‘AI Growth Zones’ early in the year came articles from the BBC raising concern that the UK’s AI ambitions could lead to water shortages. While it is true that high-density computing drives up cooling requirements, there are also numerous technologies to address this.

Large evaporative cooling towers, which can consume tens of thousands of cubic metres a year, are not popular in the UK. By August, a techUK report had found that half of England’s data centres now use waterless cooling. Other reports also suggested that used water could be deployed to cool data centres.

Industry guidance

Just as with carbon emissions, data centre water consumption is an issue both on site and through the energy supply chain. The authors of the Drowning in Data paper recognised this early on and structured the guidance around water efficiency in the cooling system; the type of water drawn on site and how it can be treated; and the water footprint of the energy supply.

The paper shows that operators, vendors and policymakers are collaborating to tackle water use with the same rigour applied to energy efficiency – and recognises that it is a system with many moving parts.

The fundamentals of water stewardship

The paper outlines six actionable principles for reducing water impact. It also recognises that these are interrelated, and that they have a relationship with energy efficiency. A brief overview is given below:

  1. Evaluate cooling systems
    Not all cooling systems are created equal. For a 5 MW data centre in London, designs based on cooling towers can consume around 38,000 m³ of water per year, adiabatic coolers around 800 m³/year, and dry coolers involve no direct water use at all. Selecting the right technology can cut water use by orders of magnitude.
  2. Minimise the water footprint of the energy used
    Beyond direct consumption, electricity generation carries an embedded water cost. No studies have yet defined the proportion for AI workloads, but studies on another intensive compute operation – Bitcoin – suggest that most of this sits in the energy footprint. Maximising energy efficiency, and using energy supplies with lower water footprints, is a key part of good water stewardship.
  3. Design with the surrounding environment in mind
    Cooling systems must take into account the surrounding environment in order to balance savings in direct water use (through reduced cooling demand) with indirect water waste through increased electricity use overall.
  4. Design with non-potable water in mind
    Grey water systems and rainwater harvesting can offset potable water demand, reducing strain on municipal supplies. However, different water qualities require different levels of electricity to make them suitable for cooling systems, and this needs to be considered.
  5. Apply systems thinking
    The surrounding community’s needs also play a part. In water-stressed areas, reducing direct water use will be a priority. In cooler, wetter areas, priority may shift towards the benefits of heat generation from the data centre – captured by direct-to-chip cooling and fed into district heating systems.
  6. Introduce circular economy principles for hardware refresh
    Extending IT equipment life and promoting reuse reduces embodied water in manufacturing – a hidden but significant component of total water impact. According to the Green Electronics Council, the manufacture of a single server requires 1,500–2,000 gallons of water.

Where next for water use in the data centre sector

Continuing press coverage in recent months shows that data centres are under scrutiny for their water use in a way that other sectors are not. A December 2025 article in The Guardian is one such example. With researchers increasingly turning towards the water footprint of AI, mainstream media is becoming more aware of indirect water consumption as a result of energy use.

No similar stories circulate about heavy industry or manufacturing, which are more established and more likely to fly under the radar. Whether or not this is fair is a moot point; water is the next frontier in data centre sustainability. As the industry scales to meet digital demand, water stewardship must become a core design principle, not an afterthought.

The Drowning in Data paper provides insight into how the sector can address this with an approach that balances operational resilience with environmental responsibility. However, it is just the start of a long, complex process of understanding impacts and balancing competing demands. The Data Centre Alliance welcomes suggestions and collaborations that can move the conversation forward. 

You can read the full paper and join the discussion at dcauk.org.

DCR Predicts: Is 2026 the year cloud customers take back control?

28 January 2026 at 11:10

James Lucas, CEO at CirrusHQ, argues that cloud autonomy and ‘choice by default’ will accelerate as organisations push back on lock-in, cost shocks, and rigid contracts.

Over the last 12 months, we’ve seen more organisations recognise the value of the cloud. For us, there’s been a significant uptick in public sector organisations taking a cloud-native approach – something I expect will continue at pace into 2026.

As organisations realise the benefits of the cloud through smaller projects aligned with best practice, it’s encouraging to see them consider future migrations and deployments. But there are other developments I foresee over the next year.

Cloud autonomy will become a reality

Gone are the days when organisations wanted the security of a lengthy contract with a single vendor. Legacy vendor lock-in in the cloud remains a challenge for many – and we’ve seen a sharp rise in organisations being hit with significant cost hikes and lengthy contract extensions. Increasingly, they’re breaking away from the status quo and demanding cloud infrastructure that gives them the flexibility their business requires.

How organisations want to work with vendors has evolved significantly since many of those contracts were first signed. With cost and commitment under greater scrutiny, I expect more organisations will recognise the value of cloud marketplaces in 2026.

Marketplaces can give organisations the autonomy to pick and choose the services and tools they need, when they need them – without the pain of restriction. And when no one knows what might be around the corner from a macroeconomic or geopolitical perspective, organisations will increasingly seek to maintain control over the business operations that are within their power.

Shadow IT vs data sovereignty

Hyperscalers are creating and launching sovereign cloud offerings to guarantee where customer data is stored and processed. But organisations using cloud services must also ensure shadow IT doesn’t undermine sovereignty efforts or create compliance gaps. Enterprises need to take this seriously in 2026.

Many IT environments can benefit from stronger best practice – regardless of whether an organisation is pursuing something as complex as sovereign cloud. Much like the adage “if you don’t test your backups, you don’t have any,” in 2026 organisations should recognise that if they don’t have automated, detailed reporting on policy compliance, then they effectively don’t have it at all.

Without automated oversight, IT estates can become unwieldy, unmanageable, and non-compliant – and often end up duplicating work and data. By automating the detection of non-compliant activity, organisations can adopt a ‘shift left’ approach: addressing issues earlier in the process and keeping the environment secure and manageable.
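As a concrete illustration of that ‘shift left’ idea, a compliance check can run as an automated scan rather than a periodic manual audit. The sketch below is hypothetical – the inventory format, region policy and field names are assumptions for illustration, not any vendor’s API – but it shows the shape of flagging non-compliant resources programmatically:

```python
# Hypothetical inventory records; in practice these would come from a cloud
# provider's API or a configuration-management export.
ALLOWED_REGIONS = {"eu-west-1", "eu-west-2"}

resources = [
    {"id": "db-01", "region": "eu-west-2", "encrypted": True},
    {"id": "vm-07", "region": "us-east-1", "encrypted": True},
    {"id": "bkt-3", "region": "eu-west-1", "encrypted": False},
]

def non_compliant(resource: dict) -> list[str]:
    """Return a list of policy violations for one resource."""
    issues = []
    if resource["region"] not in ALLOWED_REGIONS:
        issues.append("data outside approved regions")
    if not resource["encrypted"]:
        issues.append("encryption at rest disabled")
    return issues

for r in resources:
    for issue in non_compliant(r):
        print(f"{r['id']}: {issue}")
```

Run on a schedule and wired into alerting, a check like this turns the “you don’t have it at all” problem into a continuously verified report – the automated, detailed reporting the article argues for.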

AI and the cloud will be co-dependent

Unsurprisingly, AI will remain top of mind for organisations over the coming year. While many will look to AI to drive transformation, it will require a solid data foundation to thrive.

As we saw recently at AWS re:Invent, the cloud is entering a new phase of maturity in 2026, and major platform investments will likely focus on three areas: advanced AI, data consolidation, and financial control.

From what we’re seeing in the wider market, cloud platforms will make AI development more dependable by automatically managing steps, fixing errors, and tracking complex jobs – dramatically improving the stability of AI tools and long-running workloads.

For those concerned about AI’s environmental impact over the coming year and beyond, the answer isn’t halting progress. It’s treating climate, power, and water considerations as measurable factors to be managed alongside performance and cost. Thoughtful choices around architecture, suppliers, and workload optimisation can help ensure AI delivers value while aligning with sustainability goals.

Ultimately, success in 2026 won’t just be measured by migration speed. It will be measured by whether organisations can combine the foundational stability of the cloud with proactive compliance – so technology decisions are considered, deliberate, and future-proof.

That means getting cloud systems ready to operate more efficiently and intelligently. Making the cloud work harder and deliver maximum value for the business is clearly the direction we’re headed – and it’s a positive shift I fully support.

This article is part of our DCR Predicts 2026 series. Check back every day this week for a new prediction, as we count down the final days of January.
