DCR Predicts: Can data centres become ‘good neighbours’ in 2026?

2 February 2026 at 08:18

Gareth Williams, Director, UK, India, Middle East and Africa Data Centres and Technology Leader at Arup, argues that 2026 should be the turning point for designing facilities that stabilise grids, steward water, and deliver visible community benefits.

2026 marks a pivotal opportunity to transform how data centres are perceived by the public. Much has been done to change their image from anonymous ‘black boxes’ to strategic assets. Now we must ensure they are seen as positive partners for local energy, water and communities.

That means designing for reciprocity: centres that not only consume, but also stabilise grids, steward scarce water, create jobs, share heat, and leave biodiversity richer than before. This is what I see in briefs for clients, planners and operators alike: putting community benefit at the heart of developments, not as an afterthought.

Energy: from load to flexible, clean, locally useful power

AI-centric workloads are driving volatile, high-density demand, making efficiency gains harder. This is forcing smarter energy strategies, from chip-level liquid cooling and rack-level heat recovery to intelligent workload management.

We will increasingly see data centres act as energy hubs, with co-located renewables, multi-hour batteries, combined heat and power systems, and grid-service participation (frequency response, demand shifting) from day one. Pilot policies already treat facilities as grid allies, including heat-reuse quotas and flexible-access contracts. Operating models will increasingly shift compute to areas with surplus wind and sun, for example by routing non-time-critical training runs to regions with excess renewable generation.
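As a rough illustration of that operating model, the dispatch decision can be reduced to picking the region with the largest current renewable surplus. The region names, surplus figures and threshold below are invented for the sketch; real schedulers weigh latency, data gravity and price as well.

```python
# Minimal sketch of renewable-aware workload routing: send a deferrable
# training job to the region with the largest current surplus, or keep it
# queued if no region clears a minimum threshold. Figures are illustrative.

def pick_region(surpluses, min_surplus_mw=50.0):
    """Return the region with the biggest renewable surplus (MW), or None
    if nothing clears the threshold and the job should stay queued."""
    region, surplus = max(surpluses.items(), key=lambda kv: kv[1])
    return region if surplus >= min_surplus_mw else None

# Hypothetical grid telemetry: surplus wind/solar generation per region, in MW.
telemetry = {"north-sea-wind": 120.0, "iberia-solar": 85.0, "central-eu": 10.0}

print(pick_region(telemetry))             # region with the most surplus
print(pick_region({"central-eu": 10.0}))  # below threshold: job stays queued
```

A production version would also respect data-residency constraints of the kind discussed later in this series.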

Baseload energy supply options will mature unevenly. Some operators are testing power purchase agreements linked to small modular reactors to accelerate capacity. Others will combine hydrogen fuel cells for peak resilience with smart microgrids and local renewables. Regardless, the key is to offer two-way benefits: better uptime for operators and measurable support for national grid stability.

Water: design for scarcity, stewardship and circularity

Cooling demand will keep rising with denser compute. In many cases this is shifting facilities from air to liquid cooling, but the next step is water stewardship by design: closed-loop systems, immersion cooling where appropriate, and zero-freshwater ambitions in stressed catchments.

The Climate Neutral Data Centre Pact points to a water usage efficiency trajectory from ~1.8 L/kWh to 0.4 L/kWh in water-stressed sites by 2040. This is ambitious, but achievable if we switch to non-potable sources and track upstream and downstream impacts.
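To make those WUE figures concrete, a back-of-envelope conversion to annual water demand for a hypothetical 20 MW IT load (the facility size is an assumption for illustration, not drawn from the Pact):

```python
# Annual cooling-water demand at the two WUE points cited above (L/kWh),
# for a hypothetical 20 MW IT load running year-round.

IT_LOAD_MW = 20                 # illustrative facility size
HOURS_PER_YEAR = 8760

annual_kwh = IT_LOAD_MW * 1000 * HOURS_PER_YEAR   # 175,200,000 kWh/year

for wue in (1.8, 0.4):          # today's typical level vs the 2040 target
    megalitres = wue * annual_kwh / 1e6
    print(f"WUE {wue} L/kWh -> {megalitres:,.0f} megalitres/year")
```

At this scale the trajectory is the difference between roughly 315 and 70 megalitres of water a year.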

Practical levers for 2026 include site-level greywater reuse, recycled/industrial ‘brackish’ water sources, rainwater harvesting with sponge landscapes, and seawater cooling at coastal hubs — where environmental permissions and biodiversity management are designed from the outset. Singapore’s Green Data Centre Roadmap shows how regulation can drive cooling tower efficiency upgrades, blowdown recycling and cycles-of-concentration improvements that cut freshwater withdrawals at scale.

Community engagement: early, transparent, beneficial

Engagement still starts too late on many projects. Flip the sequence: begin with benefits, then shape the scheme around agreed outcomes. Practical packages include renewable partnerships that share surplus power; district heat reuse; biodiversity corridors and accessible green space; fibre upgrades that lift local connectivity; and STEM education funding and jobs for technicians and landscapers.

Community-first design de-risks approvals and earns trust. These aren’t gestures; they increase value over the life of the campus. This ‘good neighbour’ lens is the fastest way to retire the ‘black box’ image and demonstrate tangible contributions to people’s lives.

Technology: intelligent management, edge resilience, advanced cooling

AI already plays a crucial role in enhancing operations, and it’s only getting smarter. One example is Digital Realty’s collaboration with Ecolab, which identifies real-time operational inefficiencies in cooling systems and recommends improvements to conserve water.

AI-powered management will become the operating system of next-generation facilities, actively orchestrating workloads, power and cooling to maximise efficiency. Intelligent monitoring will drive automation for predictive maintenance, spotting deteriorating components early and scheduling interventions without disrupting SLAs.

At campus scale, hyperscale modular architecture (standardised power and cooling blocks with repeatable controls) will enable capacity expansion and help manage AI surges. And at rack level, advanced liquid cooling systems (direct-to-chip and rear-door heat exchangers) will integrate with smart controls to maximise performance while minimising power and water use.

Materials: low-carbon, modular, designed for circular recovery

Measuring whole-life carbon is vital to managing the sustainability of buildings and critical infrastructure, including data centres. The materials brief should be explicit: certified low-carbon or recycled steel, geopolymer concrete where feasible, and engineered timber for appropriate architectural elements and shading. Envelope design, daylighting and thoughtful material selection can cut operational and embodied impacts while improving working environments.

2026 will see increasing design for disassembly and recovery: standardised rack aisles, traceable components, and procurement that favours reclaimed metals and remanufactured cooling equipment. We should expect to link digital asset plans with physical asset lifecycle strategies, ensuring that refresh cycles trigger material recovery instead of waste.

Acceleration: scale fast, standardise what matters, customise what counts

Large, out-of-town campuses with repeatable, prefabricated/containerised solutions are the only way to match AI demand responsibly. To make this happen, owners and operators will need to standardise the backbone (power blocks, cooling modules, monitoring stacks), then customise for local energy and water contexts.

Reduced bespoke engineering means faster approvals, lower risk, and clearer community commitments (heat and water reuse, biodiversity) baked into template designs. Energy policies that treat campuses as anchor tenants and reward flexibility services will further cut delivery timelines while raising public value.

Conclusion: a systems brief

This is the year to design data centres as reciprocal systems: energy hubs that stabilise grids and disclose 24/7 clean sourcing; water stewards that minimise freshwater draw and close loops; and neighbours that fund skills, share heat, and leave landscapes better than before.

With multidisciplinary teams and a place-first brief, owners and operators can move from compliance to contribution — engineering facilities that are engines of local resilience and global compute. If we build them this way, the sector will be remembered not for what it consumed, but for what it enabled.

This article is part of our DCR Predicts 2026 series. The series has now officially concluded; you can catch all the articles at the link below.

DCR Predicts 2026

DCR Predicts: Is data sovereignty about to trigger a cloud rethink?

30 January 2026 at 08:09

With regulators and boards paying closer attention to where sensitive data sits, Fred Lherault, Field CTO EMEA/Emerging Markets at Pure Storage, outlines why hybrid strategies and selective cloud repatriation are likely to accelerate as AI scales.

After two years of accelerated AI experimentation, rising expectations, and rapid vendor expansion, I believe 2026 will mark an important inflection point for organisations building modern data infrastructure. Many enterprises are now moving past the initial hype cycle and focusing on what is required to operationalise AI reliably and at scale.

That shift is already visible across customers evaluating how AI will integrate into production workflows. If we extrapolate from these trends, several themes are likely to influence how organisations design their data pipelines, storage architectures, and cloud strategies in the year ahead. The following reflects my perspective on how these dynamics may unfold.

From hype to production: data readiness and inference become the priority

While some organisations are still convincing themselves of how essential AI is, most are now realistic about what they do, and crucially do not, deploy. The switch in focus from training to inference means that, without a robust inference platform and the ability to get data ready for AI pipelines, organisations are set to fail.

As AI inference workloads become part of the production workflow, organisations will have to ensure their infrastructure supports not just fast access, but also high availability, security, and non-disruptive operations. Not doing this will be costly, both from a results perspective, and an operational one.

However, most organisations are still struggling with the data readiness challenge. Getting data AI-ready requires going through many phases, such as data ingestion, curation, transformation, vectorisation, indexing, and serving. Each of these phases can typically take days or weeks, and delay the point when the AI project’s results can be evaluated by the business.

Organisations that care about using AI with their own data will focus on streamlining and automating the whole data pipeline for AI – not just for faster initial results evaluation, but also for continuous ingestion of newly created data, and iteration.

This remains one of the most significant barriers to AI adoption. Enterprise data is often dispersed across legacy systems, cloud environments, and archives, which makes it difficult to access and prepare at the speed AI workflows require. In 2026, we can expect this challenge to become more pronounced as organisations look to extract value from all of their data, regardless of location. Manual preparation will not scale to meet these requirements. Automated pipelines, richer metadata, and integrated data platforms will become essential foundations for organisations aiming to use AI with continuous, repeatable outcomes.
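The phases listed above can be sketched as one automated chain, so that newly arrived data re-runs the whole pipeline rather than waiting on manual steps. Every stage body below is a placeholder standing in for real ingestion, embedding and indexing tooling.

```python
# Toy data-readiness pipeline: ingestion -> curation -> transformation ->
# vectorisation -> indexing, expressed as one re-runnable chain.
# Stage bodies are stand-ins for real tooling.

def ingest(source):
    return [doc for doc in source if doc]               # drop empty records

def curate(docs):
    return sorted(set(docs))                            # dedupe exact copies

def transform(docs):
    return [d.strip().lower() for d in docs]            # normalise

def vectorise(docs):
    # Stand-in embedding: first four character codes. A real pipeline
    # would call an embedding model here.
    return [(d, [ord(c) for c in d[:4]]) for d in docs]

def build_index(vectors):
    return {doc: vec for doc, vec in vectors}           # naive lookup index

PIPELINE = (ingest, curate, transform, vectorise, build_index)

def run(source):
    data = source
    for stage in PIPELINE:          # one call re-runs the chain end to end
        data = stage(data)
    return data

index = run(["Pump logs ", "", "Sensor data", "Pump logs "])
print(sorted(index))
```

The point of the structure is the `run` loop: continuous ingestion becomes a re-invocation, not a fresh manual project.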

AI and data sovereignty will reshape cloud strategy, and accelerate selective repatriation

The dual issues of AI and data sovereignty are driving concerns about where data is stored, and how organisations can maintain trust, and guarantee access in the event of any issues. In order to extract value from AI, it is critical for organisations to know where their most important data is, and that it is ready for use.

Concerns about data sovereignty are also driving more organisations to reconsider their cloud strategy. Rising geopolitical tensions and regulatory pressure will shape nations’ data centre strategies in 2026. Governments, in particular, want to minimise the risk that access to data could be used as a threat or negotiating tactic. Organisations should be similarly wary, and prepare themselves.

We are already seeing early indicators of this shift. Boards and regulators are paying closer attention to where sensitive and strategically important data resides, driven, in part, by evolving regulatory frameworks such as GDPR, DORA, and guidance emerging from the EU AI Act. This scrutiny is prompting many organisations to reassess cloud strategies that once prioritised cost or convenience over sovereignty and resilience.

As a result, hybrid models are likely to expand, with more AI-critical datasets and workloads positioned closer to where they can be governed, audited, and controlled. This is not a retreat from the cloud, but a more deliberate, workload-specific leveraging of it.

KubeVirt will scale into mainstream production

The recent changes to VMware licensing that followed Broadcom’s acquisition have kickstarted a conversation around alternative approaches to virtualised workloads. KubeVirt, which allows management of virtual machines through Kubernetes, provides one such alternative—a platform that encompasses both virtualisation and containerisation needs—and I expect it will take off in 2026.

The KubeVirt offering has matured to the point where it is suitable for enterprise needs. For many, moving to another virtualisation provider is a huge upheaval, and, while it may eventually save money, it always comes with a set of limitations and constraints, especially when it comes to everything that surrounds the virtualisation platform (data protection, security, networking, and so on).

KubeVirt enables organisations to leverage the growing Kubernetes ecosystem to more quickly realise the value in a platform which provides the capabilities to manage, orchestrate, and monitor not just VMs, but also containers, regardless of how the proportion of those evolves over time.

KubeVirt’s momentum reflects a broader shift in how organisations want to operate their infrastructure. As containerisation becomes standard and AI workloads scale, many teams are looking for a unified operational model that reduces complexity, and avoids long-term platform lock-in. Consolidating virtual machines and containers under a single control plane aligns with this direction.

If adoption increases as predicted, storage and data services will evolve in parallel, with greater demand for persistent, low-latency, Kubernetes-native storage that can support mixed-workload environments.

2026 will be about discipline, not disruption

If the past two years have been defined by rapid disruption, driven largely by AI, 2026 is likely to be a year where organisations prioritise the operational foundation required for long-term success. Enterprises will:

  • Move from AI experimentation to consistent, production-grade inference models
  • Modernise data pipelines to support continuous data readiness
  • Reassess cloud strategies with a sharper focus on sovereignty, governance, and resilience
  • Evaluate VMware alternatives, such as KubeVirt, which support a unified approach to virtual machines and containers

The organisations able to take these shifts in their stride will be best placed for success in 2026.

This article is part of our DCR Predicts 2026 series. The series will officially end on Monday, February 2 with a special bonus prediction.


DCR Predicts: The new bottleneck for AI data centres isn’t technology – it’s permission

29 January 2026 at 08:23

As gigawatt-scale sites move from abstract infrastructure to highly visible ‘AI factories’, Tate Cantrell, Verne CTO, argues that grid capacity, water myths, and local sentiment will decide what actually gets built.

The industry in 2026 will need to get ready for hyper-dense, gigawatt-scale data centres, but preparation will be more complicated than purely infrastructure design. AI’s exploding computational demand is pushing designers to deliver facilities with greater density that consume a growing volume of power and challenge conventional cooling.

The growth of hyperscale campuses risks colliding with a public increasingly aware of power and water consumption. If that happens, a gap may open between what designers can achieve with the latest technology and what communities are willing to accept.

A growing public awareness of data centres

The sector has entered an era of scale that would have seemed implausible a few years ago. Internet giants are investing billions of dollars in facilities that redefine large-scale and are reshaping the market. Gigawatt-class sites are being built to train and deploy AI models for the next generation of online services.

But their impact extends beyond the data centre industry: the communities hosting these ‘AI factories’ are being transformed, too.

This is leading to engineered landscapes: industrial campuses spanning hundreds of acres, integrating data halls with power distribution systems and cooling infrastructure. As these sites become more visible, public awareness of the resources they consume is growing. The data centre has become a local landmark – and it’s under scrutiny.

Power versus perception

Power is one area receiving attention. Data centre growth is coinciding with the perception that hyperscale operators are competing for grid capacity or diverting renewable power that might otherwise support local decarbonisation. There is no shortage of coverage suggesting data centres are pushing up energy prices, too.

These perceptions have already had consequences. In the UK, a proposed 90 MW facility near London was challenged in 2025 by campaigners warning that residents and businesses would be forced to compete for electricity with what one campaign group leader called a “power-guzzling behemoth”. In Belgium, grid operator Elia may limit the power allocated to operators to protect other industrial users.

It would not be surprising to see this reaction continue in 2026, despite the steps taken by all data centre operators to maximise power efficiency and sustainability.

Cool misunderstandings 

Water has become another focal point. Training and inference models rely on concentrated clusters of GPUs with rack densities that exceed 100kW. The amount of heat produced in such a dense space exceeds the capabilities of air-based cooling, driving the move to more efficient liquid systems.

Yet ‘liquid cooling’ is often interpreted by the public as ‘water cooling’, feeding a perception that data centres are draining natural water sources to cool servers.

In practice, this is rarely the case. While data centres of the past have relied heavily on evaporative cooling towers to deliver lower Power Usage Effectiveness, today we see a strong and consistent trend towards lower Water Usage Effectiveness through smarter cooling and sustainable design. Developments in technology are making water-free cooling possible, too, with half of England’s data centres using waterless cooling. Many operators use non-water coolants and closed-loop systems that conserve resources.

Data centres as part of the community 

Addressing public concerns will require a change in how operators think about their place in communities. Once built, a data centre becomes part of the local fabric, and the company behind it becomes a neighbour. Developers need to view that relationship as more than transactional. They must demonstrate that growth is supported by resilient grids capable of meeting new demand without destabilising supply or driving up cost.

Water and power are essential resources, so public concern is understandable. It’s therefore important that operators show that density and efficiency can be achieved without disproportionate environmental impact. The continued rollout of AI-ready data centres will depend as much on social alignment as on advances in chip performance.

That alignment will be tested in 2026 and beyond as another wave of high-density deployments arrives. Based on NVIDIA’s product roadmap, we already have a sense of what’s coming: each generation of hardware delivers more power and heat, requiring more advanced infrastructure.

NVIDIA’s Chief Executive Jensen Huang introduced the DSX data centre architecture at GTC 2025 in Washington DC, a framework designed to make it easier for developers with limited experience to deploy large-scale, AI-ready facilities. In effect, it offers a global blueprint for gigawatt-scale ‘AI factories’.

A positive outcome of this will be a stronger push towards supply chain standardisation. Companies such as Vertiv, Schneider Electric and Eaton are aligning around modular power and cooling systems that are easily integrated into these architectures. NVIDIA, AMD and Qualcomm, meanwhile, have every incentive to encourage that standardisation. The faster infrastructure can be deployed, the faster their chips can deliver the required compute capacity.

Standardisation, then, becomes a commercial and operational imperative, but it also reinforces the need for transparency and shared responsibility.

Efficiency and expansion 

Behind all of this lies the computational driver: the transformer model. These AI architectures process and generate language, code or other complex data at scale — the foundation of today’s generative AI. They are, however, enormously power-hungry, and even though it’s reasonable to expect a few DeepSeek-type breakthroughs in 2026 – discoveries that achieve similar performance with far less energy thanks to advances in algorithms, hardware and networking – we shouldn’t expect demand for power to drop.

The technical roadmap during 2026 is clear. We are heading towards greater density, wider uptake of liquid cooling and further standardisation. With data centres running as efficiently and sustainably as possible, developers and operators will need to establish trust with local stakeholders for the resources required to develop and power the AI factories that will drive a new era of industrial innovation.

This article is part of our DCR Predicts 2026 series. Check back every day this week for a new prediction, as we count down the final days of January.

DCR Predicts 2026

How to avoid drowning in data at the expense of freshwater supplies

28 January 2026 at 15:18

TechBuyer’s Astrid Wynne argues that as AI drives up cooling demand, water stewardship must become a core design principle – not an afterthought.

As artificial intelligence accelerates demand for data centre capacity, the conversation around sustainability is shifting. Energy efficiency has long dominated the agenda, but water, the silent resource underpinning cooling systems, has emerged as a critical concern.

Scoping the problem on site and throughout operations, and providing practical guidance to avoid extra strain on freshwater use, were key aims of The Data Centre Alliance’s Drowning in Data best practice paper, published in October 2025. Developed by leading industry experts, the paper explains how to avoid freshwater use, how to account for the water footprint of energy use, and how to maximise water efficiency in cooling systems.

Growing awareness of water scarcity

Water scarcity is no longer a distant threat. Today, four billion people experience severe water stress for at least one month each year, according to a 2025 World Economic Forum report. In the UK, the deficit between the infrastructure capacity to provide clean water and the demands placed on it by agriculture, housing and industrial needs is in the billions of litres a day. The growing number of data centres, and reports of their on-site water use, began to raise alarm bells in the mainstream press in early 2025.

With Keir Starmer’s announcement of projected ‘AI Growth Zones’ early in the year came articles from the BBC raising concern that the UK’s AI ambitions could lead to water shortages. While it is true that high-density computing drives up cooling requirements, there are also numerous technologies to address this.

Large evaporative cooling towers, which can consume tens of thousands of cubic metres a year, are not popular in the UK. By August, a techUK report had found that half of England’s data centres now use waterless cooling. Other reports also suggested that used water could be deployed to cool data centres.

Industry guidance

Just as with carbon emissions, data centre water consumption is an issue both on site and through the energy supply chain. The authors of the Drowning in Data paper recognised this early on and structured the guidance around water efficiency in the cooling system; the type of water drawn on site and how it can be treated; and the water footprint of the energy supply.

The paper shows that operators, vendors and policymakers are collaborating to tackle water use with the same rigour applied to energy efficiency—and recognises that it is a system with many moving parts.

The fundamentals of water stewardship

The paper outlines six actionable principles for reducing water impact. It also recognises that these are interrelated, and that they have a relationship with energy efficiency. A brief overview is given below:

  1. Evaluate cooling systems
    Not all cooling systems are created equal. For a 5 MW data centre in London, designs that use cooling towers can consume around 38,000 m³/year, adiabatic coolers around 800 m³/year, and dry coolers result in no direct water use. Selecting the right technology can cut water use by orders of magnitude.
  2. Minimise the water footprint of the energy used
    Beyond direct consumption, electricity generation carries an embedded water cost. No studies have yet defined the proportion for AI workloads, but studies on another intensive compute operation – Bitcoin – suggest that most of this sits in the energy footprint. Maximising energy efficiency, and using energy supplies with lower water footprints, is a key part of good water stewardship.
  3. Design with the surrounding environment in mind
    Cooling systems must take into account the surrounding environment in order to balance savings in direct water use (through reduced cooling demand) with indirect water waste through increased electricity use overall.
  4. Design with non-potable water in mind
    Grey water systems and rainwater harvesting can offset potable water demand, reducing strain on municipal supplies. However, different water qualities require different levels of electricity to make them suitable for cooling systems, and this needs to be considered.
  5. Apply systems thinking
    The surrounding community’s needs also play a part. In water-stressed areas, reducing direct water use will be a priority. In cooler, wetter areas, priority may shift towards the benefits of heat generation from the data centre—captured by direct-to-chip cooling and fed into district heating systems.
  6. Introduce circular economy principles for hardware refresh
    Extending IT equipment life and promoting reuse reduces embodied water in manufacturing – a hidden but significant component of total water impact. According to the Green Electronics Council, the manufacture of a single server requires 1,500–2,000 gallons of water.
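The comparison in principle 1 is stark enough to tabulate directly; the short calculation below simply restates the paper's 5 MW London figures and derives each option's saving relative to cooling towers.

```python
# Annual direct water use for the 5 MW London example (m³/year), and the
# saving each option delivers versus the cooling-tower baseline.

options = {"cooling towers": 38_000, "adiabatic coolers": 800, "dry coolers": 0}
baseline = options["cooling towers"]

for name, m3 in options.items():
    saving = 100 * (1 - m3 / baseline)
    print(f"{name:17s} {m3:>7,} m³/yr  ({saving:5.1f}% below towers)")
```

Adiabatic coolers cut direct use by roughly 98% against towers; dry coolers remove it entirely, at the cost of the higher electricity demand flagged in principle 3.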

Where next for water use in the data centre sector

Continuing press coverage in recent months shows that data centres are under scrutiny for their water use in a way that other sectors are not. A December 2025 article in The Guardian is one such example. With researchers increasingly turning towards the water footprint of AI, mainstream media is becoming more aware of indirect water consumption as a result of energy use.

No similar stories circulate about heavy industry or manufacturing, which are more established and more likely to fly under the radar. Whether or not this is fair is a moot point; water is the next frontier in data centre sustainability. As the industry scales to meet digital demand, water stewardship must become a core design principle, not an afterthought.

The Drowning in Data paper provides insight into how the sector can address this with an approach that balances operational resilience with environmental responsibility. However, it is just the start of a long, complex process of understanding impacts and balancing competing demands. The Data Centre Alliance welcomes suggestions and collaborations that can move the conversation forward. 

You can read the full paper and join the discussion at dcauk.org.

DCR Predicts: Is 2026 the year cloud customers take back control?

28 January 2026 at 11:10

James Lucas, CEO at CirrusHQ, argues that cloud autonomy and ‘choice by default’ will accelerate as organisations push back on lock-in, cost shocks, and rigid contracts.

Over the last 12 months, we’ve seen more organisations recognise the value of the cloud. For us, there’s been a significant uptick in public sector organisations taking a cloud-native approach – something I expect will continue at pace into 2026.

As organisations realise the benefits of the cloud through smaller projects aligned with best practice, it’s encouraging to see them consider future migrations and deployments. But there are other developments I foresee over the next year.

Cloud autonomy will become a reality

Gone are the days when organisations wanted the security of a lengthy contract with a single vendor. Legacy vendor lock-in in the cloud remains a challenge for many – and we’ve seen a sharp rise in organisations being hit with significant cost hikes and lengthy contract extensions. Increasingly, they’re breaking away from the status quo and demanding cloud infrastructure that gives them the flexibility their business requires.

How organisations want to work with vendors has evolved significantly since many of those contracts were first signed. With cost and commitment under greater scrutiny, I expect more organisations will recognise the value of cloud marketplaces in 2026.

Marketplaces can give organisations the autonomy to pick and choose the services and tools they need, when they need them – without the pain of restriction. And when no one knows what might be around the corner from a macroeconomic or geopolitical perspective, organisations will increasingly seek to maintain control over the business operations that are within their power.

Shadow IT vs data sovereignty

Hyperscalers are creating and launching sovereign cloud offerings to guarantee where customer data is stored and processed. But organisations using cloud services must also ensure shadow IT doesn’t undermine sovereignty efforts or introduce non-compliance. Enterprises need to take this seriously in 2026.

Many IT environments can benefit from stronger best practice – regardless of whether an organisation is pursuing something as complex as sovereign cloud. Much like the adage “if you don’t test your backups, you don’t have any,” in 2026 organisations should recognise that if they don’t have automated, detailed reporting on policy compliance, then they effectively don’t have it at all.

Without automated oversight, IT estates can become unwieldy, unmanageable, and non-compliant – and often end up duplicating work and data. By automating the detection of non-compliant activity, organisations can adopt a ‘shift left’ approach: addressing issues earlier in the process and keeping the environment secure and manageable.
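As a sketch of what automated detection can look like, the snippet below scans resource records against a few simple policies and reports only the violations. The policy names, rules and resource records are all invented for illustration; a real estate would feed this from cloud inventory APIs.

```python
# Minimal 'shift left' compliance scan: evaluate each resource against
# simple policy checks and report failures before deployment.
# Policies and resource records are illustrative.

POLICIES = {
    "encryption_at_rest": lambda r: r.get("encrypted", False),
    "approved_region":    lambda r: r.get("region") in {"eu-west-1", "eu-west-2"},
    "owner_tag_present":  lambda r: bool(r.get("owner")),
}

def scan(resources):
    """Return {resource name: [failed policy names]} for non-compliant items."""
    report = {}
    for res in resources:
        failures = [name for name, check in POLICIES.items() if not check(res)]
        if failures:
            report[res["name"]] = failures
    return report

resources = [
    {"name": "db-prod",  "encrypted": True,  "region": "eu-west-1", "owner": "data-team"},
    {"name": "bucket-x", "encrypted": False, "region": "us-east-1"},
]

print(scan(resources))   # only the non-compliant resource is reported
```

Running a scan like this in CI, before resources are provisioned, is the 'shift left' in practice.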

AI and the cloud will be co-dependent

Unsurprisingly, AI will remain top of mind for organisations over the coming year. While many will look to AI to drive transformation, it will require a solid data foundation to thrive.

As we saw recently at AWS re:Invent, as the cloud enters a new phase of maturity in 2026, major platform investments will likely focus on three areas: advanced AI, data consolidation, and financial control.

From what we’re seeing in the wider market, cloud platforms will make AI development more dependable by automatically managing steps, fixing errors, and tracking complex jobs – dramatically improving the stability of AI tools and long-running workloads.

For those concerned about AI’s environmental impact over the coming year and beyond, the answer isn’t halting progress. It’s treating climate, power, and water considerations as measurable factors to be managed alongside performance and cost. Thoughtful choices around architecture, suppliers, and workload optimisation can help ensure AI delivers value while aligning with sustainability goals.

Ultimately, success in 2026 won’t just be measured by migration speed. It will be measured by whether organisations can combine the foundational stability of the cloud with proactive compliance – so technology decisions are considered, deliberate, and future-proof.

That means getting cloud systems ready to operate more efficiently and intelligently. Making the cloud work harder and deliver maximum value for the business is clearly the direction we’re headed – and it’s a positive shift I fully support.

This article is part of our DCR Predicts 2026 series. Check back every day this week for a new prediction, as we count down the final days of January.

DCR Predicts 2026

DCR Predicts – UK data centres are booming – but is the power running out?

By: DCR
27 January 2026 at 08:00

A panel of experts explore why grid capacity, connection queues, and rising AI power density are starting to dictate what can be built in 2026 – and where.

The UK’s data centre boom is accelerating, fuelled by the AI gold rush. Hyperscalers are expanding campuses and investment continues to flow, but the practical limits of growth are becoming harder to ignore.

Data centres already account for around 2.5% of the UK’s electricity consumption, and with AI workloads accelerating, that could rise sharply. Power availability, grid connection delays, planning constraints and sustainability pressures are no longer background considerations. As 2026 approaches, they are actively shaping what can be built, where, and how.

Power limits are no longer theoretical

For years, efficiency improvements helped offset rising demand, but that buffer is wearing thin as AI pushes power density beyond what many facilities were designed to support.

Skip Levens, Quantum’s Product Leader and AI Strategist, the LTO Program, sees a clear roadblock ahead. “In 2026, AI and HPC data centre buildouts will hit a non-negotiable limit: they cannot get more power into their data centres. Build-outs and expansions are on hold and power-hungry GPU-dense servers are forcing organisations to make hard choices.”

He suggests that modern tape libraries could be the solution to two pressing problems: “First by returning as much as 75% of power to the power budget to ‘spend’ on GPUs and servers, while also keeping massive data sets nearby on highly efficient and reliable tape technology.”

Whether or not operators adopt that specific approach, the wider point holds. Growth is no longer just about adding capacity – it’s about how power is allocated and conserved within fixed limits.
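
To make the "power budget" framing concrete, here is a back-of-envelope calculation using Levens' ~75% figure. The archive-tier and GPU-server power draws are assumed values for the sketch, not vendor specifications:

```python
# Illustrative arithmetic: power freed by moving cold data from an
# always-spinning disk archive tier to tape. All figures are assumptions.

archive_power_kw = 40.0    # assumed draw of a disk-based archive tier
tape_saving = 0.75         # Levens' figure: ~75% returned to the budget
gpu_server_kw = 10.0       # assumed draw of one GPU-dense server

freed_kw = archive_power_kw * tape_saving
extra_servers = int(freed_kw // gpu_server_kw)

print(f"Freed: {freed_kw:.0f} kW -> ~{extra_servers} extra GPU servers")
```

The point is not the specific numbers but the trade itself: within a fixed power envelope, every kilowatt reclaimed from storage is a kilowatt available for compute.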

Sustainability under pressure

Sustainability remains a defining theme for the sector, but the pace of AI-driven expansion is testing how deeply those commitments are embedded.

Terry Storrar, Managing Director at Leaseweb UK, describes the balancing act many operators are facing: “Sustainability is still the number one topic in the data centre industry. This has to work for the planet, but also from an economic perspective.

“We can’t keep running huge workloads and adding these to the grid,” he warns, “it’s simply not sustainable for the long term. So, there is huge investment into how we make technology do more for less. In the data centre industry, this translates into achieving significant power efficiencies.”

Mark Skelton, Chief Technology Officer at Node4, agrees, warning: “Data centres already consume around 2% of national power, while unchecked growth could push that to 10-15%, at a time when the grid is already strained and struggling to keep pace with soaring demand. In some areas, new developments are being delayed simply because the grid cannot deliver the required capacity quickly enough.”

To put this into perspective, Google’s new Essex facility alone is estimated to emit the same amount of carbon as 500 short-haul flights every year.

Grid delays, planning and skills gaps

There’s also a broader question of how well prepared the UK actually is for such a rapid scale-up in data centre infrastructure.

“Currently, the rush to build is overshadowing the need for a comprehensive approach that considers how facilities draw power and utilise water, as well as how their waste heat could be repurposed for nearby housing or industry,” Node4’s Skelton continues. “The technology to do this already exists, but adoption remains limited because there is little incentive or regulation to encourage it.”

In the UK, high-capacity grid connections can take over a year to secure, while planning delays and local opposition add further friction. Another roadblock is that “communities will increasingly challenge data centre expansion over water and energy use,” warns Curt Geeting, Acoustic Imaging Product Manager at Fluke. This is “pushing operators toward self-contained microgrids, hydrogen fuel cells, and other alternative power sources. Meanwhile, a growing shortage of skilled technicians and electricians will become a defining constraint.”

Geeting believes automation and AI will be key to tackling some of these infrastructure roadblocks. “The data centre test and measurement market will enter 2026 on the brink of a major transformation driven by speed, density, and intelligence. Multi-fibre connectivity will expand rapidly to meet the bandwidth demands of AI-driven workloads, edge computing, and cloud-scale growth.

“Very small form factor connectors, multi-core fibre, and even air-core fibre technologies will begin reshaping how data moves through high-density environments – enabling faster transmission with lower latency. At the same time, automation and AI will take centre stage in testing and diagnostics, as intelligent tools and software platforms automate calibration tracking, compliance verification, and predictive maintenance across vast, complex facilities.”

Edge, sovereignty and a rethink of scale

Data centres remain the backbone of the digital economy, underpinning everything from cloud services to AI and edge computing. With the rapid rise in AI, there are concerns that the UK will struggle to keep pace.

“The AWS outage reminded everyone how risky it is to depend too heavily on centralised cloud infrastructure,” urges Bruce Kornfeld, Chief Product Officer at StorMagic. “When a single technical issue can disrupt entire operations at a massive scale, CIOs are realising that stability requires balance.

“In 2026, more organisations will move toward proven on-premises hyperconverged infrastructure for mission-critical applications at the edge. This approach integrates cloud connectivity to simplify operations, strengthen uptime and deliver consistent performance across all environments. AI will continue to accelerate this shift.”

“The year ahead will favour a shift toward simplicity, uptime and management,” he adds. “The organisations that succeed will be those that figure out how to avoid downtime with simple and reliable on-prem infrastructure to run local applications. These winners understand that chasing scale for its own sake does nothing but put them in a vulnerable position.” This redistribution may ease pressure on hyperscale campuses.

Looking to 2026

Looking ahead to 2026, the pressures facing UK data centres are unlikely to ease. Power constraints, grid delays and sustainability expectations are becoming long-term issues, not just temporary obstacles. While technologies like quantum computing may eventually reshape infrastructure design, they won’t resolve the immediate challenges operators face today. The UK still has an opportunity to lead in AI and digital infrastructure, but only if growth is planned with constraint in mind. Without clearer coordination, incentives and accountability, the rush to build risks locking inefficiencies into the system for years to come. 



DCR Predicts: Hybrid wins in 2026 – and storage has to catch up

23 January 2026 at 08:00

BS Teh, Chief Commercial Officer at Seagate Technology, outlines the security, edge and cost pressures pushing organisations beyond cloud-first.

The speed at which data is generated, used and stored today is unprecedented, and it continues to grow. In 2026, this trend will accelerate further, placing even greater demands on businesses.

Globally, this is reshaping not only the IT landscape but also the way companies innovate. Data has long been the foundation for innovation: it enables the development of new business models, the automation of processes, and the customisation of products to meet individual customer needs.

Teams are increasingly data-driven, using intelligent analytics to make faster, more informed decisions. At the same time, new forms of collaboration are emerging, powered by AI tools that consolidate knowledge and foster creative exchange.

In the age of AI, the value of data is more evident than ever: it is the most important asset in the digital economy. AI algorithms rely on analysing large and diverse datasets to identify patterns, generate forecasts and create value.

The better companies capture, structure and store their data, the more effectively they can leverage AI’s potential. This means businesses capable of managing and storing large, complex datasets efficiently gain critical competitive advantages. Those able to handle data securely, flexibly and sustainably are laying the foundation for innovation, agility and long-term success.

As a result, the data storage industry is at a turning point. It must not only keep up with exponential data growth, but also deliver solutions that meet demands for sustainability, scalability and cost efficiency. This transformation is largely driven by rapid advances in AI, which generate and process ever-growing volumes of data and set new requirements for storage infrastructure.

Hybrid strategies for the next generation of the data economy

The role of AI as a growth driver and ‘data multiplier’ is undeniable. AI has made data the most valuable asset in the digital economy, prompting a fundamental shift in enterprise computing – one that is already shaping data centre planning and investment today, and will continue to do so in 2026.

Nearly 75% of business leaders are moving from a ‘cloud-first’ approach to a hybrid model that combines public cloud, private infrastructure and edge computing. The reasoning is clear: companies want to enhance security, enable real-time edge applications and reduce costs, while meeting the growing demands of AI-driven workloads.

The conclusion is simple: all data has value today. Unlocking that value requires a smarter, hybrid approach to IT infrastructure and storage – one that meets both today’s and tomorrow’s needs.

Generative AI accelerates the content explosion

Another key driver of the data explosion is GenAI, which is fuelling a boom in digital content creation. GenAI is democratising content production: employees across departments can now generate text, images and videos within minutes. This fundamentally changes workflows and introduces a new, data-driven reality for businesses.

The impact is already clear. Nearly three-quarters of businesses report that GenAI enables employees outside traditional creative roles to create content independently – for example, in sales, HR or product management.

This results not only in more content, but also in new formats that were previously too costly or time-consuming to produce, such as personalised videos, training materials or marketing assets. Over two-thirds of companies report an overall increase in content files, with faster production speeds and greater variety.

Many now create multiple versions of the same content to target audiences more precisely. At the same time, average file sizes are growing, and nearly half of companies are storing larger volumes of similar or redundant files, further increasing storage demands.

To keep up, many companies plan to retain their data for longer and are increasingly adopting data-tiering and archiving strategies. While a majority have already expanded or modernised their storage infrastructure, only one-third feel fully prepared for the demands of GenAI workloads today.

By 2026, it will become clear which companies have set the right course for sustainable data management, and which risk being overwhelmed by the content explosion.

Future-proof storage strategies will determine success in 2026

The content explosion driven by GenAI is both an opportunity and a challenge. Companies that align their storage strategies accordingly will benefit twice: they will unlock the full potential of AI-generated content while maintaining control over their data assets.

Data is becoming a strategic resource, as AI is transforming creativity, productivity and entire industries at an unprecedented pace. Businesses should treat every single byte of data as valuable, because it truly is.

This article is part of our DCR Predicts 2026 series. Check back next week for our final week with a new prediction every day.


Can nuclear keep the AI era online?

22 January 2026 at 11:47

Adhum Carter Wolde-Lule, Chief Strategy Officer at Prism Power Group, explores how rising AI-driven demand is exposing grid constraints, and why SMRs could become a long-term route to reliable, low-carbon power for data centres.

The rise of artificial intelligence and high-density computing is driving an extraordinary surge in data centre power consumption worldwide. Each new generation of AI models requires more computational capacity, and therefore more electricity, than the last.

Globally, data centre energy use is expected to jump from around 460 TWh in 2022 to more than 1,000 TWh by 2026. In the UK alone, data centres already account for 1-2% of national electricity demand, a figure set to climb sharply as AI workloads ramp up.
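
The trajectory quoted above — roughly 460 TWh in 2022 to more than 1,000 TWh by 2026 — implies a remarkably steep compound annual growth rate, which a quick calculation makes plain:

```python
# Implied compound annual growth rate of global data centre energy use,
# using the figures quoted above (460 TWh in 2022, ~1,000 TWh by 2026).

start_twh, end_twh, years = 460, 1000, 4
cagr = (end_twh / start_twh) ** (1 / years) - 1

print(f"Implied CAGR: {cagr:.1%}")
```

Growth of over a fifth per year, sustained for four years, is faster than almost any other category of electricity demand — which is why grid planners are struggling to keep pace.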

This accelerating demand is putting intense pressure on already stretched power grids. In regions such as West London, capacity constraints are so severe that new developments have been told to expect no grid connection until the mid-2030s. As a result, power availability has become the number one concern for data centre operators, with more than 90% of industry professionals reporting it as a top challenge.

The central dilemma is clear: how can data centres guarantee 24/7 uptime while meeting environmental commitments, when neither existing grid infrastructure nor intermittent renewable energy can fully meet their needs?

AI, cloud and high-performance computing facilities often require hundreds of megawatts of constant power – as much energy as a small city. Grid operators around the world are struggling to cope.

Data centres cannot tolerate power interruptions, so round-the-clock reliability is non-negotiable. Yet renewable energy sources such as wind and solar are inherently variable, and while battery storage can smooth short-term fluctuations, even the best systems today provide only several hours of firm supply.
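
A quick sizing sketch shows why batteries bridge hours rather than days. The facility load, coverage window and depth-of-discharge limit below are illustrative assumptions:

```python
# Battery capacity needed to carry a data centre through an outage.
# All figures are illustrative assumptions, not a real facility's numbers.

load_mw = 100.0          # assumed constant facility load
hours_of_cover = 4.0     # a generous multi-hour battery
usable_fraction = 0.9    # assumed depth-of-discharge limit

required_mwh = load_mw * hours_of_cover / usable_fraction

print(f"Battery needed: {required_mwh:.0f} MWh for just {hours_of_cover:.0f} h of cover")
```

Hundreds of megawatt-hours buys only an afternoon of autonomy; riding through a multi-day wind lull this way is economically out of reach, which is what drives the search for firm, clean generation.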

This has left many facilities dependent on diesel generators for emergency coverage, an arrangement that is both environmentally damaging and inconsistent with corporate net zero strategies. The growing gap between intermittent renewable generation and constant data centre demand is forcing operators to look for alternative, dependable sources of clean power.

One technology drawing increasing attention is nuclear power – specifically, Small Modular Reactors (SMRs). Unlike traditional gigawatt-scale nuclear plants, SMRs are designed to be built in factories, transported in modules and assembled on site. Most designs fall in the 50-300 MW range, making them far more flexible and suitable for industrial campuses.

SMRs offer a rare combination of carbon-free energy, a compact footprint and the ability to provide continuous baseload power at very high-capacity factors.

They can be located close to where power is consumed, potentially even adjacent to large data centre clusters, reducing reliance on strained regional grids and cutting transmission losses.

Tech giants are already positioning themselves for a nuclear-powered future. Amazon Web Services has invested in an SMR developer and is acquiring a nearly 1GW nuclear campus to support its cloud operations. Microsoft has hired nuclear specialists and agreed to procure power from the restart of the Three Mile Island reactor. Google has committed to using power from six planned SMRs by 2030. Large colocation providers like Equinix and Switch have also signed agreements with microreactor developers.

The UK government aims to play a leading role in the global SMR market. In 2025, through its Great British Nuclear initiative, the Government selected the Rolls-Royce SMR, a 470 MW modular reactor. Backed by £2.5 billion of funding, the ambition is to deploy at least three reactors by the middle of the next decade, forming a foundation for a revived domestic nuclear industry.
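
To put a 470 MW reactor in context against data centre demand, a rough annual-output calculation helps. The 90% capacity factor is an assumption (typical for nuclear, but not a published figure for this design):

```python
# Rough annual output of a 470 MW SMR, assuming a 90% capacity factor
# (an assumption typical for nuclear plants, not a design specification).

capacity_mw = 470
capacity_factor = 0.90
hours_per_year = 8760

annual_twh = capacity_mw * capacity_factor * hours_per_year / 1e6

print(f"~{annual_twh:.1f} TWh/year from one reactor")
```

Set against UK data centre consumption of 1-2% of national demand and rising, a handful of such reactors could cover a meaningful share of the sector's baseload – which is precisely the appeal.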

For data centre developers, SMRs offer the possibility of stable, clean baseload power that can be positioned close to major AI hubs.

However, major hurdles remain. Nuclear projects, regardless of size, must undergo rigorous safety, regulatory and planning processes, which means long lead times. With the first reactors not expected until the mid-2030s, SMRs cannot solve today’s capacity crunch.

Despite these challenges, the UK risks falling behind if it does not move quickly. Other countries are already advancing SMR programmes, and delays could push deployment further into the 2040s. Because SMRs are a long-term solution, data centre operators must focus on bridging the gap between today’s energy constraints and tomorrow’s nuclear options.

The first priority is improving efficiency. Advances such as liquid and immersion cooling, smarter workload scheduling and more efficient chip designs can significantly reduce power needs, easing pressure on both grids and on-site systems.

Next is building on site energy resilience. Many operators are investing in solar arrays, large-scale batteries, gas turbines and fuel cells to reduce grid reliance.

The industry should also engage in early-stage partnerships to test emerging technologies, including microreactors, advanced geothermal or hydrogen ready systems. Power purchase agreements for existing nuclear or hydroelectric energy can also immediately strengthen sustainability and reliability.

AI is reshaping global energy demand faster than traditional infrastructure can adapt. The combination of unprecedented loads, strict uptime requirements and sustainability targets means data centres must rethink how they source power. SMRs represent a promising long-term answer: clean, stable power that can be deployed close to the point of use. But they will not arrive in time to solve immediate constraints.

Over the next decade, data centre operators will need a blend of efficiency gains, renewable integration, on site generation and strategic planning, while preparing to take advantage of nuclear technologies as they mature.

Those who combine near term pragmatism with long term vision will be best positioned to deliver the reliable, sustainable, always on digital infrastructure that the AI era demands.

DCR Predicts: The ‘gig economy’ is coming to data centres in 2026

21 January 2026 at 08:00

Claire Keelan, Managing Director UK at Onnec, explains why project-based delivery models will become the backbone of new builds and upgrades in 2026, as traditional staffing struggles to match the pace and complexity of AI-led demand.

The data centre industry is constantly evolving. As AI workloads accelerate, operators are under mounting pressure to scale capacity while navigating skills shortages, infrastructure constraints and rising expectations around resilience. What worked a few years ago is no longer enough. Delivery models, workforce strategies and site design assumptions are all being tested.

In 2026, success will depend less on expansion and more on adaptability. Operators will need to rethink how projects are staffed, where capacity is built, and how existing assets are upgraded to meet AI demand. Flexible labour, broader talent inclusion, regional diversification and retrofitting will move from tactical considerations to strategic priorities.

The data centre ‘gig economy’ becomes backbone of delivery

Flexible labour models will underpin almost every new data centre project. Traditional staffing can’t scale at the speed AI demands. By 2026, flexible, crowdsourced, project-based teams will fill critical gaps across design, building, and operations. This shift isn’t about replacing expertise, it’s about redeploying it. Clear standards, accreditation, and safety frameworks will make flexibility viable at scale, turning part-time professionals and returning workers into a reliable, high-quality talent engine.

Women become central to meeting capacity targets 

With women making up less than 8% of the current workforce, the imbalance is holding the sector back. In 2026, diversity will shift from talking point to operational priority. This means targeted recruitment, retraining programmes, and mentorship networks designed to bring more women into engineering, safety, and leadership roles. Diversity will be treated as a business resilience issue, not just a social goal. This is because the industry can’t meet AI’s demands while sidelining a sizable portion of its potential workforce.

AI growth zones redraw the map

Regional ‘AI growth zones’ will emerge as the new engines of capacity. In 2026, Manchester, South Wales, and Scotland will continue to gain momentum thanks to lower land costs, renewable energy access, and close ties to academic institutions. This regional diversification will help balance power use and strengthen resilience against local constraints. The days of London and the M4 corridor as the single dominant hub are fading; the future of data centres is distributed, collaborative, and regionally connected.

Retrofitting becomes a reality check

With the UK home to one of the world’s largest portfolios of legacy data centres, over the next year operators must prove how fast they can innovate to stay ahead in the new AI landscape. 

In 2026, we’ll see a surge in retrofitted data centres as operators rush to upgrade legacy sites to meet soaring AI demand. Power and cooling will be complex, but cabling and network capacity will be the real bottlenecks. Poor-quality or overcrowded cabling limits density, throttles performance, and makes future upgrades almost impossible. 

Smart operators will invest early in high-grade structured systems that support modular expansion and long-term flexibility. ‘Retrofit-ready’ will become the new benchmark for responsible, future-proof design.

Looking into 2026

By 2026, the data centre sector will be defined less by how much capacity it builds and more by how intelligently it evolves. AI is compressing timelines, exposing fragility, and forcing long-term decisions into the present. Operators that treat this moment as a simple scaling challenge will struggle. Those that recognise it as a structural reset will set the pace.

Data centres are becoming critical national infrastructure for an AI-driven economy, and resilience will matter as much as raw performance. Leadership will belong to operators that move early, design for uncertainty, and embed adaptability. The question in 2026 is not who can grow fastest, but who can keep up when the rules keep changing.

This article is part of our DCR Predicts 2026 series. Come back every week in January for more.


Can AI make data centres greener, or will it simply make them bigger?

20 January 2026 at 08:03

Peter Schwartz, Senior Technology Consultant at OryxAlign, explores how operators can use AI, modern cooling, and cleaner power to balance rising compute demand with genuine sustainability progress.

The swift integration of AI in sectors like healthcare and manufacturing has only increased pressure on data centre infrastructure. Energy consumption is high from the outset when training large models, and it remains substantial after deployment due to continual inference cycles.

The steady demand for AI adds persistent pressure on facilities. Already, data centres are moving workloads into large-scale cloud platforms (hyperscale) or mixed (hybrid) cloud set-ups. As more activity becomes centralised, power and cooling demands in these facilities grow. This prompts operators to identify solutions that support expansion while meeting sustainability goals.

The search for innovation

Innovative thermal and power handling strategies serve as one answer for operators. These advanced methods act in parallel to align environmental efficiency with increased compute density.

Liquid cooling, for example, which was previously associated with high-performance computing (HPC) deployments, is now used broadly across facilities in thermal management for high-density racks. Systems built with dielectric fluids or direct-to-chip water channels move heat more efficiently than air cooling systems, allowing for higher rack densities and a reduced burden on traditional air-handling units. These methods also make it possible to build compact facilities that need fewer mechanical parts compared to their counterparts.
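
The thermal advantage of liquid over air falls out of the basic heat-transfer relation Q = ṁ·c·ΔT. The rack power and temperature rise below are illustrative assumptions:

```python
# Coolant mass flow needed to remove heat from a dense rack, comparing
# water and air. Rack power and temperature rise are assumptions.

q_watts = 100_000     # assumed 100 kW rack
delta_t = 10.0        # assumed 10 K coolant temperature rise
c_water = 4186.0      # specific heat of water, J/(kg*K)
c_air = 1005.0        # specific heat of air, J/(kg*K)

m_water = q_watts / (c_water * delta_t)   # kg/s of water
m_air = q_watts / (c_air * delta_t)       # kg/s of air

print(f"water: {m_water:.1f} kg/s, air: {m_air:.1f} kg/s")
```

Water's far higher specific heat (and density) means a couple of litres per second does the work of several cubic metres of air per second – which is why direct-to-chip loops enable rack densities that air handling cannot.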

AI cuts both ways, driving much of the sector’s energy demand while also enabling smarter thermal controls that help stabilise conditions and reduce energy consumption in dense compute zones. AI-driven cooling interprets sensor data to adjust environmental conditions in real time, especially as workload intensity picks up. This approach reduces unnecessary cooling activity, allowing for precise environmental control across the facility, which is especially valuable for AI training zones that experience rapid shifts in thermal loads.
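
Sensor-driven cooling can be sketched in miniature with a simple feedback loop. Real AI-driven systems use learned models rather than this proportional controller, and every number here (setpoint, gain, duty limits) is an illustrative assumption:

```python
# Toy feedback loop: adjust cooling duty in proportion to how far inlet
# temperature drifts from setpoint. A stand-in for AI-driven control,
# with all parameters assumed for illustration.

def cooling_output(temp_c, setpoint_c=27.0, gain=12.0,
                   min_pct=20.0, max_pct=100.0):
    """Return cooling duty (%) clamped between min_pct and max_pct."""
    duty = min_pct + gain * (temp_c - setpoint_c)
    return max(min_pct, min(max_pct, duty))

for t in (25.0, 27.0, 30.0, 35.0):
    print(f"{t:.0f} C -> {cooling_output(t):.0f}% duty")
```

The efficiency win described above comes from the same principle at scale: cooling tracks the actual thermal load instead of running flat-out against a worst-case assumption.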

Sustainability gains also hinge on cleaner power sourcing. Power Purchase Agreements (PPAs) support operators in switching to renewable energy sources like solar, wind and hydropower, becoming popular investments to future-proof facilities. Some data centres are now built alongside renewable assets to cut transmission losses and gain clearer insight into their electricity’s carbon profile.

Alongside these strategies, interest has grown in on-site microgrids, battery energy storage systems (BESS) and hydrogen fuel cells. Such innovations provide cleaner power that lowers dependence on legacy grids powered by fossil fuels. But these solutions do not guarantee long-term scalability and viable costs, making them harder to access, especially for smaller organisations with less land and capital compared to hyperscale providers.

Driving change through cloud

Major cloud companies also influence sustainability efforts across the sector. Microsoft and Amazon Web Services operate at a scale that places them among the world’s largest electricity users, but it also positions them as prominent low-carbon advocates. Their procurement models, certification pathways and carbon neutrality commitments are setting expectations across the sector, for both colocation partners and new policy discussions.

These providers encourage transparency and accountability through open-source design work and shared framework promotion. Efforts like carbon-aware computing, where workloads shift to periods or regions with cleaner energy, indicate a move towards more digital infrastructure tuned for sustainable performance.
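
Carbon-aware computing reduces, at its simplest, to choosing when (or where) to run deferrable work based on forecast grid carbon intensity. The forecast values below are made up for the sketch; real deployments pull them from a grid-intensity API:

```python
# Sketch of carbon-aware scheduling: run a deferrable job in the window
# with the lowest forecast carbon intensity. Forecast values are assumed.

forecast = {                  # gCO2/kWh per time slot (illustrative)
    "09:00": 310,
    "12:00": 180,
    "15:00": 150,
    "21:00": 260,
}

def greenest_slot(forecast):
    """Return the time slot with the lowest forecast carbon intensity."""
    return min(forecast, key=forecast.get)

slot = greenest_slot(forecast)
print(f"Schedule deferrable training for {slot} ({forecast[slot]} gCO2/kWh)")
```

The same comparison generalises across regions rather than hours, which is how workloads get routed to grids running on surplus wind or solar.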

However, this progress from hyperscalers emphasises a divide across the industry. Since larger businesses secure large renewable energy agreements and invest in specialised cooling systems at a pace smaller businesses cannot match, sustainability becomes a competitive differentiator, rather than a common baseline to aim for.

Legacy over longevity

Progress towards a sustainable future and data centre expansion is also limited by the existing infrastructure. Many regions operate with legacy grids that are not equipped to support current growth patterns, and new grids face installation delays because of regulatory processes or aging network capacity. Such constraints have already led to development delays in countries like Ireland, where the gap between digital expansion and physical systems lies exposed.

Financial pressures also shape progress. While green technologies offer lower long-term expenses, the upfront spend on retrofits, renewable power agreements and advanced cooling is high. Cost differences are most notable in regions that prioritise price/performance in procurement decisions, because operators in these markets work with tight margins and limited incentives, so near-term costs outweigh the long-term gains.

These global differences only increase friction. Regions with cheaper carbon-intensive electricity or limited regulatory policies see fewer reasons to commit to sustainable upgrades, which produces uneven progress instead of a unified movement within the sector.

What comes next?

A future driven by green data centres depends on coordinated progress. Utility managers and grid operators need plans aligned with policymakers, and manufacturers must work closely with cloud architects to ensure data centres can grow while minimising environmental impact.

Innovations must also coexist with these changes. Hardware and facility design must be combined with software that can steer workloads, with increased value placed on accurate responses to environmental conditions. Demand for AI in these scenarios will increase; however, AI also offers tools that support more efficient energy use and flexible load management.

We need green data centres for a digital economy aiming to grow without intensifying climate pressures. It’s a multifaceted route to sustainable development, but with shared commitment and targeted design and innovation, operators are given a realistic way forward.
