Surya Roshni Lights Up Indhana Bhawan with Advanced Façade Lighting

2 April 2026

Surya Roshni Limited, one of India’s most trusted names in lighting, wires & cables, fans, home appliances and water pumps, with the widely recognised ‘Prakash Surya’ brand across water tanks, PVC pipes and steel pipes, has successfully executed the façade lighting for Indhana Bhawan, the headquarters of Karnataka Power Transmission Corporation Limited (KPTCL), reinforcing its […]

Turning Conversation into Action: Nomad Futurist Foundation at DCD>Connect | New York

1 April 2026 at 14:00

Originally posted on Nomad Futurist.

At DCD>Connect | New York, the Nomad Futurist Foundation didn’t just participate in the conversation about building the future workforce — we demonstrated what it looks like to actively create it.

Through two milestone moments, we brought together today’s leaders and tomorrow’s innovators, proving that meaningful change in the digital infrastructure industry happens when ideas are backed by action.

Mana Hui: Aligning Leaders Around a Shared Mission 

After Day 1 of the conference, we gathered some of the industry’s most forward-thinking voices at the rooftop of The Knickerbocker Hotel for our Mana Hui: Leaders Connect Networking Event.

More than a networking reception, Mana Hui created a dedicated space for leaders to come together around a shared purpose: how we can collectively inspire, educate, and open doors for the next generation of digital infrastructure talent.

Conversations focused on tangible solutions, from increasing visibility into career pathways, to strengthening mentorship opportunities, to ensuring students and early-career professionals understand the real-world impact of this industry. The room was filled with decision-makers, innovators, and advocates aligned around one idea: preparing the future workforce is not a side initiative; it is a responsibility.

Mana Hui set the tone by reinforcing the power of collaboration. When leaders unite with intention, momentum builds, and that momentum must translate into action.

Powering the Next Generation: From Conversation to Impact 

On Day 2, that momentum became measurable impact through our Powering the Next Generation Student Workshop.

Students and emerging professionals joined us for an experience designed not just to inform, but to connect. Industry leaders shared authentic stories about their career journeys, including challenges, pivots, and lessons learned, providing students with transparent insight into opportunities across the digital infrastructure landscape.

Rather than a traditional panel format, the workshop fostered dynamic dialogue. Students actively engaged, asked thoughtful questions, and contributed their own perspectives, creating an environment rooted in collaboration and curiosity.

A defining highlight came when a group of students from New York University presented a live demonstration of one of their own projects, offering a powerful reminder that the next generation is not waiting for opportunity. They are already building the future.

We were proud to welcome students representing an exceptional range of institutions, including Harvard Law School, Columbia University, Cornell University, Dartmouth College, University of Notre Dame, Stevens Institute of Technology, and more. Many of these students are preparing to enter the workforce within months and are eager to contribute meaningfully to the industry.

Following the workshop, members of the Nomad leadership team continued the experience with a visit to the iconic 60 Hudson Street building for a tour of the NYI and Hudson Interxchange facilities, led by Ambassador Arthur Valhuerdi. For even some of our own members, it was their first time inside a live data center environment, making it a meaningful extension of the day’s learning and a powerful reminder of the infrastructure behind the digital world.

To continue reading, please click here.

The post Turning Conversation into Action: Nomad Futurist Foundation at DCD>Connect | New York appeared first on Data Center POST.

Data Governance and Clinical Innovation

31 March 2026 at 13:00

Artificial intelligence is a tool designed to power innovation, but it’s important to understand its primary fuel: data. Data is required not only for the outputs of AI algorithms but also for their training and operation. In sectors where innovation is driven by technologies like artificial intelligence, data has effectively become the fuel for that innovation, and ensuring its safety and quality is essential to sustaining it.

Understandably, many critics have expressed concern over the use of artificial intelligence in healthcare settings, considering the private, sensitive nature of the data used in the field. Patient personal information is not only highly sensitive but also protected by law, meaning there are strict regulations and guidelines dictating how entities in healthcare can use artificial intelligence with regard to patient data.

Why strong data governance is essential for AI in healthcare

However, that doesn’t mean artificial intelligence shouldn’t be used in healthcare whatsoever. Instead, it means there is a need for strong data governance, as this is an essential step in enabling safe and ethical AI use in any industry, particularly ones such as healthcare where the stakes are high. In addition to ensuring compliance with any applicable regulations, strong data governance helps create greater transparency and trust that inspires patient confidence.

It’s important to remember the reason why the healthcare sector wants to deploy artificial intelligence technology in the first place: AI can accelerate innovation and lead to improved patient outcomes. For example, innovators in the healthcare industry have used AI to accelerate drug discovery, conduct more accurate diagnostics, and streamline operations in a way that significantly improves efficiency. But to achieve these outcomes, systems must have access to accurate, well-managed data.

The key to this is creating compliance frameworks that reduce and mitigate the risks of artificial intelligence while still supporting scalable healthcare solutions. Of course, the core of any compliance framework in healthcare is data security and privacy, but these guidelines can also help control other risks, such as algorithmic bias and “black box” risks, ensuring that all decisions and recommendations made by an artificial intelligence are fair and explainable.

Enabling the responsible deployment of AI in healthcare

Ultimately, data governance isn’t about gatekeeping but about collaboration and enabling the responsible and ethical deployment of artificial intelligence. The mindset with which we approach AI shouldn’t be about limiting how we can use the technology, but instead how we can facilitate its use in a way that does not compromise data integrity or patient privacy.

Right now, the key goal of healthcare practitioners who hope to implement artificial intelligence should be to build trust and reliability in these systems. The steps required to achieve this include ensuring data quality and diversity, maintaining transparent communication, and continuously monitoring and validating these systems.

The best way to look at AI systems in healthcare is as an analog to human employees. In healthcare, not even human employees have unfettered access to patient data. There are access controls based on the level of access an individual needs, with checks and balances and supervisory control.

The same philosophy should apply to autonomous systems. Just as approvals and access controls are required of human employees, so too should AI systems require approvals from human overseers.

Indeed, there is a world in which artificial intelligence can revolutionize the healthcare industry for the better, alleviating some of the burden on healthcare workers and contributing to improved patient outcomes. However, for this to happen, the adoption of AI must be done in a way that is responsible and ethical. With this mindset, prioritizing strong data governance, AI can become a reliable partner in patient care.

# # #

About the Author

Chris Hutchins is the founder and CEO of Hutchins Data Strategy Consulting. Healthcare institutions draw on his expertise in developing scalable, ethical data and artificial intelligence strategies that maximize the potential of their data. His areas of expertise include enterprise data governance, responsible AI adoption, and self-service analytics, and he helps organizations achieve substantial results through technology implementation. Through team empowerment, Chris assists healthcare leaders in enhancing care delivery while reducing administrative work and transforming data into meaningful outcomes.

The post Data Governance and Clinical Innovation appeared first on Data Center POST.

Celebrating International Data Center Day: Inspiring the Next Generation of Tech Leaders

26 March 2026 at 16:00

Originally posted on Nomad Futurist.

Happy International Data Center Day! Today, we shine a spotlight on an industry that quietly powers our modern world. Behind every video call, online class, cloud application, and AI breakthrough is a network of infrastructure that most people never see — but rely on every single day: the data center industry.

This day is about more than celebrating technology; it’s about celebrating the people who make it all possible. From engineers and technicians to sustainability leaders, network specialists, and innovators, data centers are driven by talented professionals shaping the future of technology and connectivity.

Yet, one of the biggest challenges remains awareness. Many students and educators still don’t know that these careers exist, or the incredible opportunities they offer.

At the Nomad Futurist Foundation, we know that exposure changes everything. When students step inside a data center, meet the people behind the operations, and see the technology up close, curiosity transforms into possibility. Experiencing these environments firsthand opens doors to careers that are not only in high demand but essential to powering our digital future.

To continue reading, please click here.

The post Celebrating International Data Center Day: Inspiring the Next Generation of Tech Leaders appeared first on Data Center POST.

The 1 Gigawatt Data Center Dilemma

26 March 2026 at 15:00

The AI revolution is pushing the data center industry toward gigawatt-scale campuses. But the real question today is not how large a facility can be built. The real question is how quickly power can be converted into revenue.

Consider a 1 gigawatt data center project. One gigawatt equals one thousand megawatts of capacity. In today’s market, typical infrastructure costs for large data centers range between 8 million and 12 million dollars per megawatt for standard facilities. That places the infrastructure cost of a 1 GW campus between 8 billion and 12 billion dollars.

In many U.S. markets, developers are seeing costs closer to 10 to 14 million dollars per megawatt, which would place a 1 GW campus between 10 and 14 billion dollars. AI optimized data centers can be even more expensive due to high density racks, liquid cooling systems, and larger electrical infrastructure. Those facilities can reach 15 to 20 million dollars per megawatt, pushing a 1 GW campus to 15 to 20 billion dollars in infrastructure alone.

Once servers, GPUs, networking equipment, and storage are installed, the total project value can easily exceed 30 billion dollars. But capital cost is no longer the biggest constraint; energy is.
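The arithmetic scales linearly with capacity, which makes it easy to sanity-check. Here is a minimal Python sketch using only the per-megawatt ranges quoted above (the ranges are the article’s; the labels and code are illustrative):

CAPACITY_MW = 1_000  # one gigawatt is one thousand megawatts

# Infrastructure cost ranges quoted above, in USD per megawatt.
cost_ranges_usd_per_mw = {
    "standard facility": (8e6, 12e6),
    "typical U.S. market": (10e6, 14e6),
    "AI-optimized, high density": (15e6, 20e6),
}

for label, (low, high) in cost_ranges_usd_per_mw.items():
    low_bn = low * CAPACITY_MW / 1e9
    high_bn = high * CAPACITY_MW / 1e9
    print(f"{label}: ${low_bn:.0f}B to ${high_bn:.0f}B in infrastructure")

# standard facility: $8B to $12B in infrastructure
# typical U.S. market: $10B to $14B in infrastructure
# AI-optimized, high density: $15B to $20B in infrastructure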

According to the International Energy Agency, global data center electricity consumption reached roughly 415 terawatt hours in 2024, representing about 1.5 percent of global electricity demand. That number is projected to approach 800 terawatt hours by 2030 as AI adoption accelerates. At the same time, power infrastructure is struggling to keep up. The United States interconnection queue alone now exceeds 2 terawatts of generation capacity waiting for approval, and in many regions new grid connections can take three to six years. This creates a major financial challenge for traditional hyperscale development.

Large buildings are often constructed years before sufficient power becomes available. Hundreds of megawatts of capacity can sit idle while developers wait for substations, transmission lines, and utility upgrades. On a one gigawatt campus that could mean billions of dollars tied up in infrastructure waiting for power.

Now compare that with a modular campus strategy.

Instead of constructing massive buildings designed for the full gigawatt from day one, the campus can be deployed incrementally as power becomes available. A one gigawatt campus could begin with a 20 megawatt deployment. Using the same industry pricing ranges, that first deployment would require between 160 and 240 million dollars at eight to twelve million dollars per megawatt, or up to 300 to 400 million dollars if the facility is designed for high density AI workloads. What makes this model powerful is how quickly revenue can begin.

In many markets AI capacity is leasing between 150 thousand and 250 thousand dollars per megawatt per month depending on location and density. A 20 megawatt deployment can therefore generate roughly 3 to 5 million dollars per month, or approximately 36 to 60 million dollars per year, while the rest of the campus continues expanding. Instead of waiting years for a massive hyperscale facility to be completed, the project can begin generating revenue within 12 to 18 months.

As additional power becomes available the campus grows from twenty megawatts to one hundred megawatts, then several hundred megawatts, and eventually the full one gigawatt capacity. By the time the campus reaches full scale, the project may already be generating hundreds of millions of dollars annually.
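The phased cash profile can be sketched from the same figures: the first 20 megawatts leasing at 150 to 250 thousand dollars per megawatt per month, then later milestones as power arrives. A rough Python illustration (milestones beyond 20 MW are assumed for illustration, not taken from the article):

LEASE_LOW, LEASE_HIGH = 150_000, 250_000  # USD per MW per month

def annual_revenue_usd(live_mw):
    # Annual lease revenue range for the capacity currently energized.
    return live_mw * LEASE_LOW * 12, live_mw * LEASE_HIGH * 12

# Hypothetical build-out milestones; only the 20 MW phase is the article's.
for live_mw in (20, 100, 500, 1_000):
    low, high = annual_revenue_usd(live_mw)
    print(f"{live_mw:>5} MW live: ${low / 1e6:,.0f}M to ${high / 1e6:,.0f}M per year")

#    20 MW live: $36M to $60M per year
#   100 MW live: $180M to $300M per year
#   500 MW live: $900M to $1,500M per year
# 1,000 MW live: $1,800M to $3,000M per year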

There is another strategic advantage that is becoming increasingly important: mobility of infrastructure.

If power availability changes, new energy sources come online, or grid constraints shift to another region, modular facilities can be redeployed where energy exists. Massive fixed hyperscale buildings cannot move.

This dramatically changes the risk profile.

Traditional hyperscale development concentrates 10 to 20 billion dollars into a single permanent structure. Modular campuses distribute capital across infrastructure that scales directly with available power.

In a world where energy has become the limiting factor for digital growth, the future of hyperscale development may not be one giant building. It may be gigawatt scale campuses built from modular infrastructure designed to grow with power.

# # #

About the Author

Kliton Agolli, Co-Founder, Board Member & Director of Global Growth, Northstar Technologies Group | Naples, Florida.

Kliton Agolli is a senior security and international business development executive with more than 35 years of experience operating at the intersection of national security, executive protection, counterintelligence, and global commercial expansion. His career spans military service, law enforcement, VIP and diplomatic protection, healthcare and hospitality security, and cross-border business development in complex and high-risk environments.

At Northstar Technologies Group, Mr. Agolli leads global growth strategy, international partnerships, and strategic market expansion. He plays a key role in aligning advanced security and infrastructure technologies with government, defense, healthcare, and mission-critical commercial clients worldwide. His work focuses on risk-informed growth, regulatory compliance, and building long-term strategic alliances across Europe, the Middle East, and the United States.

The post The 1 Gigawatt Data Center Dilemma appeared first on Data Center POST.

Data Center HVAC Market to Surpass USD 36 Billion by 2035

19 March 2026 at 13:00

The global data center HVAC market was valued at USD 13.7 billion in 2025 and is estimated to grow at a CAGR of 9.8% to reach USD 36 billion by 2035, according to a recent report by Global Market Insights Inc.
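Those headline figures are internally consistent: compounding the 2025 base at the stated CAGR for ten years lands close to the 2035 estimate. A quick Python check of the standard compound-growth formula (this reproduces the arithmetic only, not Global Market Insights’ underlying model):

base_2025 = 13.7   # market size in 2025, USD billion
cagr = 0.098       # 9.8% compound annual growth rate
years = 2035 - 2025

projected_2035 = base_2025 * (1 + cagr) ** years
print(f"Projected 2035 market size: USD {projected_2035:.1f} billion")
# Prints USD 34.9 billion, in line with the ~USD 36 billion headline.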

Growth in the global data center HVAC industry is being fueled by rising computing intensity, expanding AI-driven workloads, and the continued development of hyperscale and enterprise facilities. As server densities increase and high-performance computing environments generate greater thermal loads, advanced cooling infrastructure has become essential to maintain operational stability and uptime. Research and development efforts across the HVAC industry are increasingly focused on liquid cooling technologies and next-generation thermal management systems capable of handling elevated power densities.

At the same time, stricter regulatory oversight related to energy consumption and environmental performance is encouraging operators to enhance system efficiency and reduce carbon output. ESG-focused initiatives and net-zero commitments are prompting facility upgrades aimed at optimizing Power Usage Effectiveness and lowering operating expenses. Improvements in airflow engineering, adoption of sustainable refrigerants, and integration of energy-efficient cooling architectures are reshaping infrastructure strategies. As regulatory expectations and energy costs continue to rise, demand for intelligent, high-efficiency HVAC solutions in data centers is expected to accelerate significantly.

Rising load capacities, sustainability targets, and regulatory compliance requirements are creating pressure for compact, scalable, and adaptable HVAC systems. Industry participants are responding by designing modular cooling platforms that can operate effectively across diverse geographies while maximizing space utilization and energy performance.

The solutions segment of the data center HVAC market accounted for a 76% share in 2025 and is forecast to grow at a CAGR of 8.9% from 2026 to 2035. Advanced monitoring tools equipped with artificial intelligence enable predictive maintenance, improve airflow management, and reduce unnecessary power consumption. Increased adoption of liquid-based cooling technologies is supporting high-density server environments while enhancing reliability and extending equipment lifespan through energy-conscious design.

The air-based cooling technologies segment held a 50% share in 2025 and is projected to grow at a CAGR of 8.8% during 2026-2035. Enhanced airflow optimization systems, variable-speed fan configurations, and intelligent environmental controls are improving thermal consistency and minimizing energy waste. Economizer-enabled designs are facilitating greater use of ambient air, while modular cooling units support scalability across both hyperscale and edge environments. Growing server power density is also accelerating interest in direct cooling and immersion-based methods supported by advanced coolant formulations that enhance heat transfer efficiency.

The United States data center HVAC market reached USD 4.7 billion in 2025. Increasing cloud integration and AI-intensive applications are driving demand for more efficient cooling architectures. Investments are being supported by electrification incentives and decarbonization initiatives, encouraging broader adoption of intelligent HVAC controls and energy-optimized systems. Integration with smart building platforms and grid-responsive technologies is enabling facilities to manage peak loads, reduce demand charges, and incorporate renewable energy sources.

Key companies operating in the global data center HVAC market include Vertiv, Schneider Electric, Carrier Global, Daikin Industries, Trane Technologies, Johnson Controls, STULZ, Alfa Laval, Danfoss, and Modine Manufacturing. Companies in the global market are strengthening their competitive position through continuous innovation, strategic partnerships, and geographic expansion. Leading players are investing heavily in research and development to enhance liquid cooling efficiency, improve airflow intelligence, and integrate AI-driven monitoring systems. Collaborations with cloud service providers and data center developers are enabling customized cooling deployments for high-density environments. Firms are also expanding manufacturing capacity and regional service networks to support rapid infrastructure growth. Sustainability-focused product development, including low-global-warming-potential refrigerants and energy-efficient system architectures, is becoming a central competitive differentiator.

The post Data Center HVAC Market to Surpass USD 36 Billion by 2035 appeared first on Data Center POST.

Why Effective Facilities Management Is Essential for Today’s Data Centre Operators

27 February 2026 at 16:00

Originally posted on Datalec LTD.

In a digital economy where uptime is non-negotiable, effective critical facilities management (FM) is becoming a primary lever for managing outage risk in high‑density, AI‑driven data centres. As infrastructure grows more complex and AI-driven compute places unprecedented strain on power and cooling systems, operators face escalating risks, making the cost of getting FM wrong higher than ever.

Evolving Pressures, Escalating Risks: The New Reality for Data Centre Operators

Despite steady year-on-year improvements in resilience, the industry continues to operate under significant pressure. According to Uptime Institute’s 2025 Outage Analysis, outages are occurring less frequently but are becoming more complex and more expensive when they do happen. Power-related failures remain the leading cause of impactful incidents, accounting for 54% of major outages, while 53% of operators reported at least one outage in the past three years, even as overall rates decline.

This challenge is amplified by the rapid rise of AI and its high-density compute requirements. AI workloads are now “straining existing infrastructure, especially around power and cooling,” creating new categories of risk that simply didn’t exist a decade ago. Staffing shortages across the sector add further pressure, reducing the availability of experienced professionals capable of managing mission-critical environments.

The financial implications are equally significant. More than 54% of organisations reported that their most recent outage exceeded $100,000 in cost, and 20% experienced losses above $1 million. For large enterprises, downtime can reach $540,000 to well over $1 million per hour, depending on sector and workload criticality.

This is the operating landscape that data centre leaders must now navigate, where even small procedural missteps can cascade into business-critical failures.

To continue reading, please click here.

The post Why Effective Facilities Management Is Essential for Today’s Data Centre Operators appeared first on Data Center POST.

The Construction Industry Has a $500 Billion Problem — And AI Is Finally Ready to Solve It

19 February 2026 at 15:30

A veteran forensic consultant’s patent-pending platform is exposing the hidden scheduling failures that silently destroy value across every major infrastructure project in America.

Every year, billions of dollars in construction value are destroyed: not by bad materials, not by incompetent workers, not even by unforeseen site conditions. They are destroyed by scheduling failures that nobody caught in time.

Ricardo Hinojos has spent more than two decades in the field, from underground utilities to hyperscale data centers serving some of the most demanding clients on the planet. In project after project, he kept seeing the same quiet crisis: schedules that looked clean on paper but were riddled with deficiencies invisible to the human eye — missing logic ties, resource conflicts, unrealistic durations, and cascading risks that would not surface until millions of dollars were already committed.

The industry accepted this as normal. Mr. Hinojos refused to.

The Data Tells a Brutal Story

In forensic work analyzing over $3.2 billion in construction projects, RHSS found that more than 70 percent of construction schedules contain critical deficiencies: errors significant enough to compromise project delivery, inflate costs, and expose owners to litigation. These are not minor formatting issues. These are logic gaps that cause downstream collapse. Duration assumptions that defy physics. Resource allocations that exist only on paper.

For hyperscale data center construction, where a single day of delay can cost hundreds of thousands of dollars in lost revenue, this is not merely an operational problem. It is a financial crisis in slow motion. Amazon, Google, Microsoft, and the broader hyperscale ecosystem are racing to bring capacity online against unprecedented demand, with the margin for error shrinking every quarter.

Why Traditional Scheduling Tools Are Not Enough

Primavera P6. Microsoft Project. Oracle. These are powerful platforms. But they are instruments, not intelligence. They record what you tell them. They do not question whether what you told them is right. The gap between a schedule that looks compliant and one that is defensible has always required a seasoned expert to bridge. Until now, that meant expensive consultants, weeks of review, and subjective judgment calls that did not always hold up in court or in client meetings.

The construction industry has been waiting, perhaps without realizing it, for something categorically better.

A New Paradigm: Predictive Schedule Intelligence

The platform developed by Ricardo Hinojos Scheduling Solutions represents what the firm calls predictive schedule intelligence, an AI-powered validation system purpose-built for the complexity of hyperscale infrastructure projects. The patent-pending system achieves 91 percent accuracy in identifying schedule deficiencies before they become field problems. It does not merely flag errors; it predicts cascading impacts, generates litigation-grade documentation, and produces defensible forensic analysis at a fraction of the time and cost of traditional methods.
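The RHSS platform itself is proprietary, but the flavor of one of the simplest DCMA 14-Point style checks, flagging activities with missing logic ties, can be sketched in a few lines of Python. This is a toy illustration over an assumed data model, not the patent-pending system:

from dataclasses import dataclass, field

@dataclass
class Activity:
    id: str
    name: str
    predecessors: list[str] = field(default_factory=list)
    successors: list[str] = field(default_factory=list)
    is_milestone: bool = False

def find_open_ends(schedule):
    # Non-milestone activities should have at least one predecessor and
    # one successor; anything else is an "open end" that can hide risk.
    return [a.id for a in schedule
            if not a.is_milestone and (not a.predecessors or not a.successors)]

schedule = [
    Activity("A100", "Notice to proceed", successors=["A110"], is_milestone=True),
    Activity("A110", "Site grading", predecessors=["A100"], successors=["A120"]),
    Activity("A120", "Underground utilities", predecessors=["A110"]),  # missing successor
]
print(find_open_ends(schedule))  # ['A120']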

“This is not a bolt-on feature for an existing platform,” Mr. Hinojos said. “It is a ground-up rethinking of how construction intelligence should work. The industry has accepted preventable failure for too long.”

What RHSS Delivers

  • Automated Schedule Validation: Quality checks against DCMA 14-Point analysis, contract specifications, and industry standards, completed in hours rather than weeks.
  • AI-Driven Resource Loading: Manpower forecasting and crew productivity analysis tied to real-world RS Means labor data across all construction disciplines.
  • Forensic Delay Analysis: Court-ready documentation and defensible delay analysis built to withstand litigation, arbitration, and regulatory scrutiny.
  • Earned Value Integration: Real-time project health visibility through EVM metrics calibrated for hyperscale data center construction workflows.

The Bigger Picture

We are entering an era where the organizations that build the fastest, most reliably, and most cost-effectively will not simply be the ones with the best labor or the best materials. They will be the ones with the best intelligence systems. RHSS was built precisely for this moment, and for the clients, partners, and technology companies that recognize what is at stake in the race to deliver the infrastructure that powers the modern economy.

# # #

About the Author

Ricardo Hinojos is a Certified Forensic Construction Consultant (CFCC) with 20+ years in construction project management and forensic consulting. He specializes in hyperscale data center construction scheduling, forensic delay analysis, and AI-powered project intelligence. He holds a patent-pending AI schedule validation system achieving 91% accuracy across $3.2 billion in analyzed projects and serves as an expert witness in construction delay litigation and arbitration.

The post The Construction Industry Has a $500 Billion Problem — And AI Is Finally Ready to Solve It appeared first on Data Center POST.

Rethinking Data Center Construction In The AI Era – The QTS Experience Podcast

16 February 2026 at 21:00

Originally posted on Compu Dynamics.

The data center industry is entering a new phase — one defined less by generic flexibility and more by purpose-built design. For years, operators relied on large, adaptable white-space shells to support a wide range of workloads. That model served the cloud era well. But the rise of AI and high-density computing is reshaping infrastructure requirements, pushing the industry toward more integrated, modular, and performance-driven environments.

In a recent QTS podcast with David McCall, Steve Altizer, CEO of Compu Dynamics, shares his perspective on how prefabrication and modular white-space design are becoming foundational to building data centers ready for the AI era.

Why the White Space Is the New Frontier for Modular Innovation

As AI workloads push power density to new extremes, long-standing assumptions about how data centers are designed and built are being challenged. White space, once treated as a static and custom-built environment, is rapidly becoming the next frontier for modular innovation.

Why Density Changes Everything

AI workloads aren’t just hotter; they’re architecturally different. When you’re deploying GPU arrays that demand 100kW per rack today and 600kW tomorrow, you’re not simply installing servers; you’re building a machine. The sheer volume of structural steel, high-pressure liquid cooling pipes, power distribution, and network infrastructure required to support these dense deployments creates an entirely new opportunity: factory assembly.

Traditional cloud data centers were too light and airy to justify prefabrication – components would literally fall apart in transit. But modern AI infrastructure is robust, dense, and highly engineered. It’s perfect for modular construction. Think of it as building a motherboard rather than a room. Every element – power, cooling, network – works in precise coordination to support the chips doing the computational work.

To continue reading, please click here.

The post Rethinking Data Center Construction In The AI Era – The QTS Experience Podcast appeared first on Data Center POST.

Campaigners mount last-ditch bid to block data centre at former RBS HQ

By: DCR
2 February 2026 at 13:34

Campaigners have launched a last-ditch effort to block a proposed data centre in Edinburgh, warning that the development’s back-up power plans could mean ‘100,000 idling cars-worth of diesel’ being burned if generators were ever run at scale.

Action to Protect Rural Scotland (APRS) published research ahead of a meeting this week where Edinburgh City Council is due to consider planning permission in principle for a green data centre at Redheughs Avenue in the Gyle area – which is the site of RBS’ former HQ. 

Edinburgh Council planning officers have recommended the project for approval, but campaigners have warned that the application does not take into account the full environmental impact of the site. 

Kat Jones, a Director at Action to Protect Rural Scotland, noted, “There is so much information missing from the application documents about the environmental impacts of this development. 

“The data centre will draw 210MW from the grid, which would power a quarter of a million homes, so a few low energy lighting solutions are neither here nor there. And that’s before we even start talking about the diesel generators.”

“If there were medals for greenwashing then these data centre developers are Olympic-level. The claims from the developer that this is a green data centre are obviously bunkum.”

The framing around the diesel generators can certainly sound scary – but as many in the data centre industry will know, they are primarily there as emergency back-up, rather than a primary power source. In fact, one of the key reasons that Scotland is being considered as an attractive location to site data centres is because of the high availability of power coming from renewable generation in the area. Scotland already produces much more power than it consumes, and data centres are hoping to tap into that surplus, and even potentially reduce the amount of curtailment that is required when the country is producing too much electricity. 

That doesn’t mean the backup generators won’t ever be required – grid issues can and do happen – but that can happen at many sites, whether it’s a hospital, factory or warehouse, and isn’t exclusive to data centres. Scotland also may have an abundance of power, but it still needs its grid reinforcing if it’s to use more of that power, something SSE and National Grid are keen to deliver.

But Dr Jones is keen to stress that data centres have already shown more regular use of diesel backup generators than other sectors, noting, “When you look at what is happening in the US, diesel generators are being used more as the grid comes under pressure from the demand from data centres due to their astronomical energy demands.

“This site is just upwind of the city centre, close to residential homes, and 220 metres from a nursery. This is not something that should be happening with so little oversight – and without being required to do an environmental impact assessment.”

While Dr Jones is not wrong, it’s not exactly the same situation. US markets cited in these debates often face acute, localised capacity constraints and commercial incentives that can normalise generator operation beyond rare emergencies; Scotland’s system challenges are different. That doesn’t remove the central planning question – what happens if generators run more frequently than residents expect, and what conditions or assessments are in place to manage that risk? That will be up to Edinburgh City Council.

What the proposals actually entail

The proposed data centre would sit on the former campus of Royal Bank of Scotland, a large office complex originally constructed in the early 1990s. Shelborn Asset Management bought the site in 2021, and the original buildings were demolished in 2022 following NatWest Group staff moving to Gogarburn.

The developer later pivoted away from office-led plans and consulted on a campus featuring two data centre buildings of different sizes and a new on-site substation – which could help with power capacity and ensure the back-up generators remain turned off.

When planning officers assessed the site, they noted that the plans had ‘regard to the global climate and nature crises through re-use of brownfield land in a sustainable location’, while also adding that ‘it is not considered that the proposal will have a significant effect on the environment’. 

For councillors, the decision is whether to accept officers’ recommendation and approve the scheme in principle, or whether the questions raised over diesel back-up, local impacts and the absence of a formal environmental impact assessment justify holding the project back. We’ll find out when the council’s planning committee meets on Wednesday, February 4.

DCR Predicts: Can data centres become ‘good neighbours’ in 2026?

2 February 2026 at 08:18

Gareth Williams, Director, UK, India, Middle East and Africa Data Centres and Technology Leader at Arup, argues that 2026 should be the turning point for designing facilities that stabilise grids, steward water, and deliver visible community benefits.

2026 marks a pivotal opportunity to transform how data centres are seen in the public eye. Much has been done to change perceptions from anonymous ‘black boxes’ into strategic assets. Now we must ensure they are seen as positive partners for local energy, water and communities.

That means designing for reciprocity: centres that not only consume, but also stabilise grids, steward scarce water, create jobs, share heat, and leave biodiversity richer than before. This is what I see in briefs for clients, planners and operators alike: putting community benefit at the heart of developments, not as an afterthought.

Energy: from load to flexible, clean, locally useful power

AI-centric workloads are driving volatile, high-density demand, making efficiency gains harder. This is forcing smarter energy strategies, from chip-level liquid cooling and rack-level heat recovery to intelligent workload management.

We will increasingly see data centres act as energy hubs, with co-located renewables, multi-hour batteries, combined heat and power systems, and grid-service participation (frequency response, demand shifting) from day one. Pilot policies already treat facilities as grid allies, including heat-reuse quotas and flexible-access contracts. Operating models will increasingly shift compute to areas with surplus wind and sun, routing non-time-critical training to regions where clean energy is abundant.

Baseload energy supply options will mature unevenly. Some operators are testing power purchase agreements linked to small modular reactors to accelerate capacity. Others will combine hydrogen fuel cells for peak resilience with smart microgrids and local renewables. Regardless, the key is to offer two-way benefits: better uptime for operators and measurable support for national grid stability.

Water: design for scarcity, stewardship and circularity

Cooling demand will keep rising with denser compute. This can shift demand in some cases from air to liquid solutions, but the next step is water stewardship by design: closed-loop systems, immersion cooling where appropriate, and zero-freshwater ambitions in stressed catchments.

The Climate Neutral Data Centre Pact points to a water usage efficiency trajectory from ~1.8 L/kWh to 0.4 L/kWh in water-stressed sites by 2040. This is ambitious, but achievable if we switch to non-potable sources and track upstream and downstream impacts.
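For context, WUE is litres of water consumed per kilowatt-hour of IT energy, so the savings from that trajectory scale directly with IT load. A rough Python worked example for a hypothetical 50 MW IT load (the load figure is an assumption, not from the Pact):

IT_LOAD_MW = 50           # assumed IT load for illustration
HOURS_PER_YEAR = 8760
it_kwh_per_year = IT_LOAD_MW * 1_000 * HOURS_PER_YEAR

for wue in (1.8, 0.4):    # L/kWh, the Pact trajectory endpoints
    megalitres = wue * it_kwh_per_year / 1e6
    print(f"WUE {wue} L/kWh: {megalitres:,.0f} ML of water per year")

# WUE 1.8 L/kWh: 788 ML of water per year
# WUE 0.4 L/kWh: 175 ML of water per year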

Practical levers for 2026 include site-level greywater reuse, recycled/industrial ‘brackish’ water sources, rainwater harvesting with sponge landscapes, and seawater cooling at coastal hubs — where environmental permissions and biodiversity management are designed from the outset. Singapore’s Green Data Centre Roadmap shows how regulation can drive cooling tower efficiency upgrades, blowdown recycling and cycles-of-concentration improvements that cut freshwater withdrawals at scale.

Community engagement: early, transparent, beneficial

Engagement still starts too late on many projects. Flip the sequence: begin with benefits, then shape the scheme around agreed outcomes. Practical packages include renewable partnerships that share surplus power, district heat reuse, biodiversity corridors and accessible green space, fibre upgrades that lift local connectivity, and STEM education funding with jobs for technicians and landscapers.

Community-first design de-risks approvals and earns trust. These aren’t gestures; they increase value over the life of the campus. This ‘good neighbour’ lens is the fastest way to retire the ‘black box’ image and demonstrate tangible contributions to people’s lives.

Technology: intelligent management, edge resilience, advanced cooling

AI already plays a crucial role in enhancing operations, and it’s only getting smarter. One example is Digital Realty’s collaboration with Ecolab, which identifies real-time operational inefficiencies in cooling systems and recommends improvements to conserve water.

AI-powered management will become the operating system of next-generation facilities, actively orchestrating workloads, power and cooling to maximise efficiency. Intelligent monitoring will drive automation for predictive maintenance, spotting deteriorating components early and scheduling interventions without disrupting SLAs.

At campus scale, hyperscale modular architecture (standardised power and cooling blocks with repeatable controls) will enable capacity expansion and help manage AI surges. And at rack level, advanced liquid cooling systems (direct-to-chip and rear-door heat exchangers) will integrate with smart controls to maximise performance while minimising power and water use.

Materials: low-carbon, modular, designed for circular recovery

Measuring whole-life carbon is vital to managing the sustainability of buildings and critical infrastructure, including data centres. The materials brief should be explicit: certified low-carbon or recycled steel, geopolymer concrete where feasible, and engineered timber for appropriate architectural elements and shading. Envelope design, daylighting and thoughtful material selection can cut operational and embodied impacts while improving working environments.

2026 will see increasing design for disassembly and recovery: standardised rack aisles, traceable components, and procurement that favours reclaimed metals and remanufactured cooling equipment. We should expect to link digital asset plans with physical asset lifecycle strategies, ensuring that refresh cycles trigger material recovery instead of waste.

Acceleration: scale fast, standardise what matters, customise what counts

Large, out-of-town campuses with repeatable, prefabricated/containerised solutions are the only way to match AI demand responsibly. To make this happen, owners and operators will need to standardise the backbone (power blocks, cooling modules, monitoring stacks), then customise for local energy and water contexts.

Reduced bespoke engineering means faster approvals, lower risk, and clearer community commitments (heat and water reuse, biodiversity) baked into template designs. Energy policies that treat campuses as anchor tenants and reward flexibility services will further cut delivery timelines while raising public value.

Conclusion: a systems brief

This is the year to design data centres as reciprocal systems: energy hubs that stabilise grids and disclose 24/7 clean sourcing; water stewards that minimise freshwater draw and close loops; and neighbours that fund skills, share heat, and leave landscapes better than before.

With multidisciplinary teams and a place-first brief, owners and operators can move from compliance to contribution — engineering facilities that are engines of local resilience and global compute. If we build them this way, the sector will be remembered not for what it consumed, but for what it enabled.

This article is part of our DCR Predicts 2026 series. The series has now officially concluded; you can catch all the articles at the link below.

DCR Predicts 2026

Rising Silver Prices Push Solar Industry to Rethink Materials and Reduce Dependence – EQ

In Short: Soaring silver prices are creating cost pressures for solar manufacturers, prompting efforts to reduce or replace silver usage in photovoltaic technologies. As silver is a critical input for solar cells, companies are exploring alternative materials, efficiency improvements, and new manufacturing processes to control costs while maintaining performance and supporting large-scale solar deployment.

In Detail: The sharp rise in global silver prices has become a growing concern for the solar industry, as silver is a key raw material used in photovoltaic cell manufacturing. Solar firms are increasingly facing higher production costs, which could impact project economics, equipment pricing, and long-term profitability if material dependency is not addressed.

Silver is primarily used in the conductive paste that forms electrical contacts in solar cells, enabling efficient flow of electricity. Although the amount of silver per cell has reduced over the years through technological improvements, the scale of global solar deployment means overall demand for silver continues to rise significantly.

With silver prices reaching multi-year highs, manufacturers are under pressure to optimize material usage. Rising input costs can reduce margins for module producers and increase capital expenditure for solar developers, particularly in price-sensitive markets where competitive tariffs leave little room for cost escalation.

To manage these risks, solar companies are investing in research and development to reduce silver content in solar cells. Techniques such as thinner conductive lines, improved cell architectures, and more precise manufacturing processes are helping minimize silver usage without compromising electrical efficiency.

Some firms are also exploring alternative materials to partially or fully replace silver. Copper, aluminum, and other conductive metals are being tested as potential substitutes, although challenges remain in terms of durability, efficiency, corrosion resistance, and long-term performance under harsh operating conditions.

Technological innovation is playing a crucial role in this transition. Advanced cell designs such as TOPCon, heterojunction, and back-contact technologies allow more efficient use of conductive materials, enabling manufacturers to achieve higher power output with lower precious metal consumption.

From a strategic perspective, reducing silver dependence is also about long-term supply security. Silver is used across multiple industries, including electronics, electric vehicles, and investment markets, making it vulnerable to supply constraints and speculative price movements that can disrupt solar manufacturing plans.

Policy and market dynamics further influence this shift. As governments push for rapid renewable energy expansion, keeping solar affordable is essential for achieving climate targets. Material cost control becomes a critical factor in maintaining the competitiveness of solar power compared to other energy sources.

Overall, the solar industry’s efforts to cut or replace silver usage reflect a broader trend toward material efficiency and technological resilience. By reducing reliance on expensive and volatile inputs, solar manufacturers can protect project economics, strengthen supply chains, and ensure the continued scalability of solar energy in a rapidly evolving global energy landscape.

Saudi Arabia pivots NEOM ‘gigaproject’ to AI data centre hub

By: DCR
29 January 2026 at 15:47

Saudi Arabia is reportedly preparing to scale back NEOM, its marquee ‘gigaproject’ on the Red Sea, looking instead to develop an AI data centre hub.

According to unnamed sources cited by a report in the Financial Times, Saudi Arabia will scale back its hugely ambitious NEOM megaproject, which was to create a new livable region in the desert in the northwest of the country, on the Red Sea coast. The project was announced in 2017 by Crown Prince Mohammad Bin Salman and was a cornerstone of his Vision 2030. It was to cover about 26,500 square km, roughly the size of Belgium.

[Image: Sindalah, a luxury island destination and the first physical showcase of NEOM, pictured on 27 October 2024.]

NEOM was due for completion in 2030 and included plans for a city called The Line – a row of 500m-tall skyscrapers stretching for some 200km. However, NEOM suffered many delays and cost overruns, as well as criticism for potential environmental damage and for being unrealistic, among other things.

In addition, Saudi Arabia is hosting the Expo international trade fair in 2030 and the football World Cup in 2034, both of which involve large-scale investment. Work on NEOM was paused in 2025 while the government looked at its options in a year-long review, which is scheduled to conclude this quarter.

According to the Financial Times report, the focus for the region will now be more on industry, such as becoming a hub for data centres. Its location means sea water can be used for cooling, and the Crown Prince is keen to make his country a leader in AI infrastructure – a hub for data centres to power AI – to attract inward investment and high-profile international partners.

An unnamed source cited by the FT said the location had other advantages too, such as digital infrastructure and its position at the crossroads of three continents (Africa, Asia and Europe), plus almost limitless renewable energy and available land.

It’s not the first time NEOM has been touted as potentially playing host to data centres, with DataVolt committing $5 billion to develop a new 1.5 GW net zero AI campus at NEOM’s Oxagon. That was expected to come online in 2028, but it’s unknown whether it’ll be impacted by the planned rethink for the NEOM area.

This article originally appeared on Mobile Europe, with additional commentary from Data Centre Review.

Lanarkshire becomes Scotland’s first AI Growth Zone, UK’s fifth

29 January 2026 at 14:40

Lanarkshire has been named the UK’s latest AI Growth Zone, with the UK Government backing a major expansion around DataVita’s data centre site in the area. 

This is the first AI Growth Zone located in Scotland, which has long been positioned as an ideal area to host one – given the abundance of renewable power that is available in the region. The Scottish Government has also been keen to promote the area in hopes of developing it into a leading zero-carbon, cost-competitive green data centre hub. 

The Lanarkshire AI Growth Zone, which is the fifth AIGZ to be announced, is set to be based around DataVita’s campus, with the Scottish data centre firm delivering the site in partnership with AI cloud provider CoreWeave. That’s slightly different from other sites, which have often been positioned around multiple data centre operators, such as the North East Growth Zone, which is centred around expansions to existing campuses from Cobalt Park Data Centres and the QTS Cambois campus.

Despite being centred around the one expanded campus, the UK Government still has big hopes for the site. In fact, it’s hoped that the site will bring more than 3,000 jobs to the area over the coming years, including 50 apprenticeships. Around 800 roles are expected to be higher-paid AI and digital infrastructure jobs, spanning everything from research and software to permanent staff running and maintaining data centres, with the remainder tied to construction and site development.

Alongside job creation, ministers are pointing to £8.2 billion of private investment, plus a community fund worth up to £543 million over the next 15 years, which the Government says will be raised as data centre capacity comes online.

What’s being built as part of the Lanarkshire AI Growth Zone

The Lanarkshire AI Growth Zone may be centred around DataVita and CoreWeave’s partnership, but that doesn’t mean it’s just a single facility. On the contrary, the site is expected to feature 100MW of AI-ready data centre capacity, over 1GW of renewable energy infrastructure connected via private wire, and ‘Innovation Parks’ intended to attract adjacent industries that want proximity to large-scale compute.

That extra power will be key to the deployment of this latest AI Growth Zone, with it seen as a key tenet of gaining the designation, but it should also go some way towards helping reduce public opposition. Another data centre located to the south of Glasgow in Hulford has seen intense local opposition due to its enormous power demands, with residents outraged that the site wouldn’t even need to calculate the environmental impact on the local area. 

DataVita and CoreWeave will be keen to avoid the same backlash – which is why the companies are leaning heavily on a whole host of sustainability claims for the Lanarkshire AI Growth Zone. As well as using renewable energy to help power the site, the two firms also plan to make use of waste heat.

The current plan is that excess heat from cooling systems could, in time, be redirected to support the nearby University Hospital Monklands, described as Scotland’s first fully digital and net zero hospital – though that element is presented as something to be explored once the site is fully up and running, rather than a guaranteed near-term deliverable.

That would be a huge win for advocates of heat networks, with a recent report suggesting that waste heat from UK data centres could heat 3.5m+ homes – it could also help the site win favour with local residents who are impacted by the plans. 

It’s not the only part of the plan that has been developed in a bid to win over residents. In fact, a community fund – worth up to £543 million over 15 years – will also be set up to support local programmes ranging from skills and training packages through to after-school coding clubs and support for local charities and foodbanks. 

DataVita’s parent company, HFD Group, is also expected to contribute £1 million per year to local charities and community groups, on top of the Growth Zone community funding mechanism.

Industry reaction

Commenting on plans for the first AI Growth Zone in Scotland, the UK’s Technology Secretary Liz Kendall noted, “Today’s announcement is about creating good jobs, backing innovation and making sure the benefits AI will bring can be felt across the community – that’s how the UK government is delivering real change for the people of Scotland.

“From thousands of new jobs and billions in investment through to support for local people and their families, AI Growth Zones are bringing generation-defining opportunities to all corners of the country.”

Danny Quinn, Managing Director of DataVita, added, “Scotland has everything AI needs – the talent, the green energy, and now the infrastructure. But this goes beyond the physical build. We’re creating innovation parks, new energy infrastructure, and attracting inward investment from some of the world’s leading technology companies. 

“This is a real opportunity for North Lanarkshire, and we want to make sure local people share in it. The £543 million community fund means the benefits stay here – good jobs, new skills, and investment that actually reaches the people who live and work in this area.”

Matthew Baynes, VP of Secure Power and Data Centres at Schneider Electric UK & Ireland, concluded, “In the twelve months since the introduction of the AI Opportunities Action Plan, the UK has seen much progress towards its AI ambitions.

“The new AI Growth Zone (AIGZ) announced today in Lanarkshire demonstrates just how far the country has come in its plans to build a sovereign AI nation, with Scotland becoming a critical new infrastructure hub and joining those in Wales, Oxfordshire, and the Northeast of England.

“Furthermore, the country has now secured more than £31B in investment from some of the world’s largest, leading tech companies, demonstrating that the UK has the people, resources and ambition to make AI a centrepiece of a new and revitalised Industrial Strategy.

“While this can be considered a success in many respects, there is still much work to do. Access to renewable power remains one of the biggest hurdles facing many parts of the country, and as the UK’s energy technology partner for data centres and AI Infrastructure, we believe there is a clear opportunity to catalyse both the AI and green transitions by turning data centres into the energy centres of the future – fast-tracking new developments with behind-the-meter power generation and microgrids.

“Furthermore, the AIGZ announced today could not be more timely. We believe Scotland, with its cool temperate climate and rich conditions to generate renewable energy, provides a key opportunity to create secure, scalable and sustainable infrastructure capable of galvanising the AI race. Now, the UK’s sustainability and AI ambitions must work together hand-in-glove, demonstrating that today’s technology can be a catalyst for a greener future, powered by AI.”

DCR Predicts: The new bottleneck for AI data centres isn’t technology – it’s permission

29 January 2026 at 08:23

As gigawatt-scale sites move from abstract infrastructure to highly visible ‘AI factories’, Tate Cantrell, Verne CTO, argues that grid capacity, water myths, and local sentiment will decide what actually gets built.

The industry in 2026 will need to get ready for hyper-dense, gigawatt-scale data centres, but preparation will be more complicated than purely infrastructure design. AI’s exploding computational demand is pushing designers to deliver facilities with greater density that consume a growing volume of power and challenge conventional cooling.

The growth of hyperscale campuses risks colliding with a public increasingly aware of power and water consumption. If that happens, a gap may open between what designers can achieve with the latest technology and what communities are willing to accept.

A growing public awareness of data centres

The sector has entered an era of scale that would have seemed implausible a few years ago. Internet giants are investing billions of dollars in facilities that redefine large-scale and are reshaping the market. Gigawatt-class sites are being built to train and deploy AI models for the next generation of online services.

But their impact extends beyond the data centre industry: the communities hosting these ‘AI factories’ are being transformed, too.

This is leading to engineered landscapes: industrial campuses spanning hundreds of acres, integrating data halls with power distribution systems and cooling infrastructure. As these sites become more visible, public awareness of the resources they consume is growing. The data centre has become a local landmark – and it’s under scrutiny.

Power versus perception

Power is one area receiving attention. Data centre growth is coinciding with the perception that hyperscale operators are competing for grid capacity or diverting renewable power that might otherwise support local decarbonisation. There is no shortage of coverage suggesting data centres are pushing up energy prices, too.

These perceptions have already had consequences. In the UK, a proposed 90 MW facility near London was challenged in 2025 by campaigners warning that residents and businesses would be forced to compete for electricity with what one campaign group leader called a “power-guzzling behemoth”. In Belgium, grid operator Elia may limit the power allocated to operators to protect other industrial users.

It would not be surprising to see this reaction continue in 2026, despite the steps taken by all data centre operators to maximise power efficiency and sustainability.

Cool misunderstandings 

Water has become another focal point. Training and inference models rely on concentrated clusters of GPUs with rack densities that exceed 100 kW. The amount of heat produced in such a dense space exceeds the capabilities of air-based cooling, driving the move to more efficient liquid systems.
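To get a feel for the scale involved, a back-of-the-envelope sketch helps. The numbers below are our own illustrative assumptions (standard air properties and a 15 K supply-to-exhaust temperature rise), not figures from any specific facility:

```python
# A rough sanity check (our own illustrative numbers, not from the article):
# how much airflow would it take to remove rack heat with air alone?
# Heat removed by an air stream: P = m_dot * cp * dT.

CP_AIR = 1005.0        # specific heat of air, J/(kg*K)
AIR_DENSITY = 1.2      # approximate air density, kg/m^3
M3S_TO_CFM = 2118.88   # m^3/s -> cubic feet per minute

def airflow_cfm(rack_power_w: float, delta_t_k: float = 15.0) -> float:
    """Volumetric airflow (CFM) needed to carry `rack_power_w` watts of heat
    at a supply-to-exhaust temperature rise of `delta_t_k` kelvin."""
    mass_flow_kg_s = rack_power_w / (CP_AIR * delta_t_k)
    return (mass_flow_kg_s / AIR_DENSITY) * M3S_TO_CFM

for kw in (10, 30, 100):
    print(f"{kw:>3} kW rack: ~{airflow_cfm(kw * 1000):,.0f} CFM")
# 10 kW -> ~1,200 CFM is manageable; 100 kW -> ~11,700 CFM through a single
# rack footprint is not, which is why liquid cooling takes over at these densities.
```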

Yet ‘liquid cooling’ is often interpreted by the public as ‘water cooling’, feeding a perception that data centres are draining natural water sources to cool servers.

In practice, this is rarely the case. While data centres of the past have relied heavily on evaporative cooling towers to deliver lower Power Usage Effectiveness, today we see a strong and consistent trend towards lower Water Usage Effectiveness through smarter cooling and sustainable design. Developments in technology are making water-free cooling possible, too, with half of England’s data centres using waterless cooling. Many operators use non-water coolants and closed-loop systems that conserve resources.
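Since both metrics underpin this argument, here is a minimal sketch of how PUE and WUE are calculated. The annual readings are invented purely to illustrate the trade-off described above, in which evaporative towers lower PUE at the cost of WUE and closed-loop systems do the reverse:

```python
# Minimal sketch of the two metrics named above. All readings are hypothetical,
# invented purely to illustrate the trade-off described in the text.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (ideal = 1.0)."""
    return total_facility_kwh / it_kwh

def wue(water_litres: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water consumed per kWh of IT energy."""
    return water_litres / it_kwh

sites = {
    # Evaporative towers: less electricity spent on cooling, lots of water.
    "evaporative": {"facility_kwh": 11_500_000, "it_kwh": 10_000_000, "water_l": 18_000_000},
    # Closed-loop / water-free: slightly more electricity, essentially no water.
    "closed-loop": {"facility_kwh": 12_200_000, "it_kwh": 10_000_000, "water_l": 0},
}

for name, s in sites.items():
    print(f"{name}: PUE = {pue(s['facility_kwh'], s['it_kwh']):.2f}, "
          f"WUE = {wue(s['water_l'], s['it_kwh']):.2f} L/kWh")
```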

Data centres as part of the community 

Addressing public concerns will require a change in how operators think about their place in communities. Once built, a data centre becomes part of the local fabric, and the company behind it becomes a neighbour. Developers need to view that relationship as more than transactional. They must demonstrate that growth is supported by resilient grids capable of meeting new demand without destabilising supply or driving up cost.

Water and power are essential resources, so public concern is understandable. It’s therefore important that operators show that density and efficiency can be achieved without disproportionate environmental impact. The continued rollout of AI-ready data centres will depend as much on social alignment as on advances in chip performance.

That alignment will be tested in 2026 and beyond as another wave of high-density deployments arrives. Based on NVIDIA’s product roadmap, we already have a sense of what’s coming: each generation of hardware delivers more power and heat, requiring more advanced infrastructure.

NVIDIA’s Chief Executive Jensen Huang introduced the DSX data centre architecture at GTC 2025 in Washington DC, a framework designed to make it easier for developers with limited experience to deploy large-scale, AI-ready facilities. In effect, it offers a global blueprint for gigawatt-scale ‘AI factories’.

A positive outcome of this will be a stronger push towards supply chain standardisation. Companies such as Vertiv, Schneider Electric and Eaton are aligning around modular power and cooling systems that are easily integrated into these architectures. NVIDIA, AMD and Qualcomm, meanwhile, have every incentive to encourage that standardisation. The faster infrastructure can be deployed, the faster their chips can deliver the required compute capacity.

Standardisation, then, becomes a commercial and operational imperative, but it also reinforces the need for transparency and shared responsibility.

Efficiency and expansion 

Behind all of this lies the computational driver: the transformer model. These AI architectures process and generate language, code or other complex data at scale – the foundation of today’s generative AI. They are, however, enormously power-hungry, and even though it’s reasonable to expect a few DeepSeek-type breakthroughs in 2026 – discoveries that achieve similar performance with far less energy thanks to advances in algorithms, hardware and networking – we shouldn’t expect demand for power to drop.

The technical roadmap during 2026 is clear. We are heading towards greater density, wider uptake of liquid cooling and further standardisation. With data centres running as efficiently and sustainably as possible, developers and operators will need to establish trust with local stakeholders for the resources required to develop and power the AI factories that will drive a new era of industrial innovation.

This article is part of our DCR Predicts 2026 series. Check back every day this week for a new prediction, as we count down the final days of January.

DCR Predicts 2026

We’re going On the Record with a new column series

By: DCR
28 January 2026 at 15:20

Data Centre Review is launching a new monthly column series, dubbed On the Record, which will feature regular commentary from named contributors across the data centre industry.

The new series is designed to provide a spotlight to select voices to share perspectives on the issues shaping the sector, from resilience and energy regulation to skills and emerging technologies.

Data Centre Review has always been a town hall – a place where diverse opinions are allowed to shine, and that will continue. However, unlike one-off guest comment pieces, On the Record is structured as a recurring series, with contributors publishing on Data Centre Review each and every month. That gives our readers a consistent set of industry viewpoints to follow over time.

What to expect from On the Record

Each On the Record column will offer a direct, accountable viewpoint from a recognised organisation or specialist contributor. Topics will span the challenges and opportunities facing data centres today, including:

  • Design, build and operations best practice
  • Emerging trends and technology impacts
  • Energy, sustainability and regulation
  • Infrastructure and resilience
  • Skills, talent and leadership

We’re launching the series with two initial contributors: 

On the Record with the Data Centre Alliance – This column will bring an industry-wide perspective on standards, priorities and the big conversations influencing the sector.

On the Record with Critical Careers – This column will focus on careers and representation in the industry, with an emphasis on women entering data centres and the barriers that still exist.

The first On the Record with the Data Centre Alliance is now live, exploring water scarcity and whether the UK’s data centre industry can do more to reduce its water usage. You can read that here.

Additional contributors are expected to be added over time, expanding the range of organisations and topics represented within the series. 

DCR Predicts – UK data centres are booming – but is the power running out?

By: DCR
27 January 2026 at 08:00

A panel of experts explore why grid capacity, connection queues, and rising AI power density are starting to dictate what can be built in 2026 – and where.

The UK’s data centre boom is accelerating, fuelled by the AI gold rush. Hyperscalers are expanding campuses and investment continues to flow, but the practical limits of growth are becoming harder to ignore.

Data centres already account for around 2.5% of the UK’s electricity consumption, and with AI workloads accelerating, that could rise sharply. Power availability, grid connection delays, planning constraints and sustainability pressures are no longer background considerations. As 2026 approaches, they are actively shaping what can be built, where, and how.

Power limits are no longer theoretical

For years, efficiency improvements helped offset rising demand, but that buffer is wearing thin as AI pushes power density beyond what many facilities were designed to support.

Skip Levens, Product Leader and AI Strategist for the LTO Program at Quantum, sees a clear roadblock ahead. “In 2026, AI and HPC data centre buildouts will hit a non-negotiable limit: they cannot get more power into their data centres. Build-outs and expansions are on hold and power-hungry GPU-dense servers are forcing organisations to make hard choices.”

He suggests that modern tape libraries could solve two pressing problems at once: “First by returning as much as 75% of power to the power budget to ‘spend’ on GPUs and servers, while also keeping massive data sets nearby on highly efficient and reliable tape technology.”

Whether or not operators adopt that specific approach, the wider point holds. Growth is no longer just about adding capacity – it’s about how power is allocated and conserved within fixed limits.

Sustainability under pressure

Sustainability remains a defining theme for the sector, but the pace of AI-driven expansion is testing how deeply those commitments are embedded.

Terry Storrar, Managing Director at Leaseweb UK, describes the balancing act many operators are facing: “Sustainability is still the number one topic in the data centre industry. This has to work for the planet, but also from an economic perspective.

“We can’t keep running huge workloads and adding these to the grid,” he warns. “It’s simply not sustainable for the long term. So, there is huge investment into how we make technology do more for less. In the data centre industry, this translates into achieving significant power efficiencies.”

Mark Skelton, Chief Technology Officer at Node4, agrees, warning, “Data centres already consume around 2% of national power, while unchecked growth could push that to 10-15%, at a time when the grid is already strained and struggling to keep pace with soaring demand. In some areas, new developments are being delayed simply because the grid cannot deliver the required capacity quickly enough.”

To put this into perspective, Google’s new Essex facility alone is estimated to emit the same amount of carbon as 500 short-haul flights every year.

Grid delays, planning and skills gaps

There’s also a broader question of how well prepared the UK actually is for such a rapid scale-up in data centre infrastructure.

“Currently, the rush to build is overshadowing the need for a comprehensive approach that considers how facilities draw power and utilise water, as well as how their waste heat could be repurposed for nearby housing or industry,” Node4’s Skelton continues. “The technology to do this already exists, but adoption remains limited because there is little incentive or regulation to encourage it.”

In the UK, high-capacity grid connections can take over a year to secure, while planning delays and local opposition add further friction. Another roadblock is that “communities will increasingly challenge data centre expansion over water and energy use,” warns Curt Geeting, Acoustic Imaging Product Manager at Fluke. This is “pushing operators toward self-contained microgrids, hydrogen fuel cells, and other alternative power sources. Meanwhile, a growing shortage of skilled technicians and electricians will become a defining constraint.”

Geeting believes automation and AI will be key to tackling some of these infrastructure roadblocks. “The data centre test and measurement market will enter 2026 on the brink of a major transformation driven by speed, density, and intelligence. Multi-fibre connectivity will expand rapidly to meet the bandwidth demands of AI-driven workloads, edge computing, and cloud-scale growth.

“Very small form factor connectors, multi-core fibre, and even air-core fibre technologies will begin reshaping how data moves through high-density environments – enabling faster transmission with lower latency. At the same time, automation and AI will take centre stage in testing and diagnostics, as intelligent tools and software platforms automate calibration tracking, compliance verification, and predictive maintenance across vast, complex facilities.”

Edge, sovereignty and a rethink of scale

Data centres remain the backbone of the digital economy, underpinning everything from cloud services to AI and edge computing. With the rapid rise in AI, there are concerns that the UK will struggle to keep pace.

“The AWS outage reminded everyone how risky it is to depend too heavily on centralised cloud infrastructure,” urges Bruce Kornfeld, Chief Product Officer at StorMagic. “When a single technical issue can disrupt entire operations at a massive scale, CIOs are realising that stability requires balance.

“In 2026, more organisations will move toward proven on-premises hyperconverged infrastructure for mission-critical applications at the edge. This approach integrates cloud connectivity to simplify operations, strengthen uptime and deliver consistent performance across all environments. AI will continue to accelerate this shift.”

“The year ahead will favour a shift toward simplicity, uptime and management,” he adds. “The organisations that succeed will be those that figure out how to avoid downtime with simple and reliable on-prem infrastructure to run local applications. These winners understand that chasing scale for its own sake does nothing but put them in a vulnerable position.” This redistribution may ease pressure on hyperscale campuses.

Looking to 2026

Looking ahead to 2026, the pressures facing UK data centres are unlikely to ease. Power constraints, grid delays and sustainability expectations are becoming long-term issues, not just temporary obstacles. While technologies like quantum computing may eventually reshape infrastructure design, they won’t resolve the immediate challenges operators face today. The UK still has an opportunity to lead in AI and digital infrastructure, but only if growth is planned with constraint in mind. Without clearer coordination, incentives and accountability, the rush to build risks locking inefficiencies into the system for years to come. 

This article is part of our DCR Predicts 2026 series. Check back every day this week for a new prediction, as we count down the final days of January.

DCR Predicts 2026

How the West Lost the Automotive Industry

26 January 2026 at 02:22

By David Waterworth and Paul Wildman. Yes, past tense. The West has already lost its dominance of the global auto industry. Why? And will the USA become the new Cuba? Recently, my writing colleague, Dr Paul Wildman, contacted me and suggested we explore these topics. What is the West’s capability ... [continued]

The post How the West Lost the Automotive Industry appeared first on CleanTechnica.

AI Data Center Market to Surpass USD 1.98 Trillion by 2034

21 January 2026 at 15:00

The global AI data center market was valued at USD 98.2 billion in 2024 and is estimated to grow at a CAGR of 35.5% to reach USD 1.98 trillion by 2034, according to a recent report by Global Market Insights Inc.
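As a quick sanity check on those headline figures (our arithmetic, not the report's), compounding the 2024 base at the stated CAGR lands close to the projected total:

```python
# Sanity-checking the report's headline arithmetic (our calculation, not theirs):
# grow the 2024 base at the stated CAGR for the ten years to 2034.

base_usd_bn = 98.2   # 2024 market value, USD billions (from the report summary)
cagr = 0.355         # stated compound annual growth rate
years = 10           # 2024 -> 2034

projected_bn = base_usd_bn * (1 + cagr) ** years
print(f"Projected 2034 value: USD {projected_bn / 1000:.2f} trillion")
# -> about USD 2.05 trillion, in line with the ~USD 1.98T headline; the CAGR
# implied exactly by 98.2B -> 1.98T over ten years is closer to 35.0%.
```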

Growing adoption of generative AI and machine learning tools requires extraordinary processing power and storage capabilities, increasing reliance on data centers specifically optimized for AI workloads. These environments depend on advanced GPUs, scalable system architecture, and ultra-low-latency networking to support complex model training and inference across industries such as finance, healthcare, and retail. Big data analytics is also accelerating demand, as organizations handle massive streams of structured and unstructured information that must be processed rapidly.

AI-focused facilities enable high-performance computing for real-time workloads, strengthening their role as essential infrastructure for global digital transformation. The rapid expansion of cloud computing, along with the rising number of hyperscale facilities, continues to amplify the need for AI-ready infrastructures. Providers are investing in advanced AI data platforms that offer scalable services to enterprises and developers, further increasing market momentum.

The hardware segment of the AI data center market accounted for USD 61.1 billion in 2024. Growth is driven by expanding use of AI chips, GPU accelerators, advanced cooling technologies, high-density server systems, and optical networking solutions. Rising GPU energy requirements, the shift toward rack densities of 30-120 kW, and large-scale deployment strategies introduced by leading technology companies are shaping long-term capital allocation in the sector.

The cloud-based category held a 58% share in 2024 and is projected to grow at a CAGR of 35.2% from 2025 through 2034. This segment leads due to its unmatched scalability, flexible consumption options, and access to the latest AI-accelerated computing hardware without upfront investment. Hyperscale providers are making multi-billion-dollar commitments to strengthen global AI infrastructures, propelling adoption of AI-driven services and increasing demand for GPUs, TPUs, and specialized processors.

The U.S. AI data center market generated USD 33.2 billion in 2024. The country maintains a leading position supported by prominent hyperscale operators and substantial investments in GPU clusters, liquid cooling, and large-scale AI-aligned builds. Federal incentives, regional tax advantages, and infrastructure funding have further solidified the United States as the most capacity-rich region for AI computing.

Key participants in the AI data center market include Huawei, AWS, NVIDIA, HPE, Digital Realty, Google, Lenovo, Microsoft, Equinix, and Dell Technologies. Companies expanding their foothold in the market are focusing on infrastructure modernization, large-scale GPU deployments, and energy-efficient system design.

Many firms are investing in high-density racks, integrated liquid cooling, and next-generation networking to support advanced AI workloads. Strategic partnerships with chipmakers, cloud providers, and colocation operators help accelerate capacity expansion and ensure access to cutting-edge AI hardware. Providers are also scaling global data center footprints, enhancing automation capabilities, and optimizing power utilization through renewable-energy integration. Long-term contracts with enterprises, AI-as-a-service offerings, and the buildout of specialized AI clusters further reinforce competitive positioning and market dominance.

The post AI Data Center Market to Surpass USD 1.98 Trillion by 2034 appeared first on Data Center POST.

Creating Critical Facilities Manpower Pipelines for Data Centers

23 December 2025 at 15:00

The digital technology ecosystem and virtual spaces are powered by data – its storage, processing, and computation – and data centers are the mitochondria on which this ecosystem depends. From online gaming and video streaming (including live events) to e-commerce transactions, credit and debit card payments, and the complex algorithms that drive artificial intelligence (AI), machine learning (ML), cloud services, and enterprise applications, data centers support nearly every aspect of modern life. Yet the professionals who operate and maintain these facilities – data center facilities engineers, technicians, and operators – remain largely unsung heroes of the information age.

Most end users, particularly consumers, rarely consider the backend infrastructure that enables their digital experiences. The continuous operation of data centers depends on the availability of adequate and reliable power and cooling for critical IT loads, robust fire protection systems, and tightly managed operational processes that together ensure uptime and system reliability. For users, however, the expectation is simple and unambiguous: online services must work seamlessly and be available whenever they are needed.

According to the Data Center Map, there are 668 data centers in Virginia, more than 4,000 in the United States, and over 11,000 globally. Despite this rapid growth, the industry faces a significant challenge: it is not producing enough qualified technicians, engineers, and operators to keep pace with the expansion of data center infrastructure in the United States, even though average total compensation is around $70,000 and can reach $109,000 in Northern Virginia, as estimated by Glassdoor.

Data center professionals require highly specialized electrical and mechanical maintenance skills and knowledge of network/server operations gained through robust training and hands-on experience. Sadly, the industry risks falling short of its workforce needs due to the unprecedented scale and speed of data center construction. This growth is being fueled by the global race for AI dominance, increasing demand for digital connectivity, and the continued expansion of cloud computing services.

Industry projections highlight the magnitude of the challenge. Omdia (as reported by Data Center Dynamics) suggests data center investment will likely hit $1.6 trillion by 2030, while BloombergNEF forecasts data center power demand of 106 gigawatts by 2035. All of these projects and projections demand skilled workers the industry does not currently have, and that vacuum will create problems in the future if it is not filled with the right people. According to the Uptime Institute’s 2023 survey, 58% of operators find it difficult to attract qualified candidates and 55% report difficulty retaining staff. The Uptime Institute’s 2024 data center staffing and recruitment survey shows turnover rates of 26% for electrical trades and 21% for mechanical trades. The Birmingham Group estimates that AI facilities will create about 45,000 data center technician and engineer jobs, with industry employment projected to reach 780,000 by 2030.

Meeting current and future workforce demands requires both leveraging existing talent pipelines and creating new ones. Technology is evolving at high speed, and filling critical data center positions increasingly demands professionals who are not only technically skilled but also continuously trained to keep pace with changing industry standards and technologies.

Organizational Apprenticeship and Training Programs

Organizations should invest in training and apprenticeship programs for individuals with technical training from community colleges, creating pipelines of technically skilled candidates to fill critical positions and securing the future of those roles within the data center industry.

Expanding Trade Programs in Community Colleges

Community colleges should expand their technical trade programs, because these programs create life-sustaining careers with the potential for high incomes. Northern Virginia Community College has spearheaded data center operations programs that train individuals to fill entry-level critical facilities positions in Northern Virginia and beyond.

Veterans Re-entry Programs 

Many military veterans already possess the transferable skills needed within data center critical facilities, and organizations should leverage this opportunity. Programs such as Disabled American Veterans, the DOD’s Transition Assistance Program, and other military and DOD initiatives offer ready recruitment channels.

# # #

About the Author

Rafiu Sunmonu is the Supervisor of Critical Facilities Operations at NTT Global Data Centers Americas, Inc.

The post Creating Critical Facilities Manpower Pipelines for Data Centers appeared first on Data Center POST.
