
ZincFive Earns TIME GreenTech Recognition for Third Straight Year

30 March 2026 at 19:00

ZincFive®, a leader in nickel-zinc (NiZn) battery-based solutions for immediate power applications, has once again been recognized by TIME, earning a place on its America’s Top GreenTech Companies 2026 list for the third consecutive year. Developed in partnership with Statista, the ranking evaluates companies on environmental impact, financial strength, and innovation, placing ZincFive among a select group shaping the future of sustainable technology.

This year, the company ranked #142 out of more than 3,500 evaluated organizations and is one of only two companies headquartered in Oregon to be included on the list.

The recognition reflects continued momentum for ZincFive’s nickel-zinc battery technology, which has gained traction as an alternative to traditional energy storage options in mission-critical environments. As data centers evolve to support artificial intelligence and increasingly dynamic workloads, the need for power solutions that deliver both performance and safety has become more pronounced.

ZincFive’s approach centers on immediate power, delivering high power density in a compact footprint while avoiding the risks associated with other battery chemistries. Nickel-zinc batteries are designed to provide reliable performance without thermal-runaway concerns and rely on more abundant, recyclable materials, supporting both operational and environmental goals.

For ZincFive, continued recognition from TIME signals more than a milestone. It reflects a broader shift in how the industry is evaluating power infrastructure, with greater emphasis on safety, sustainability, and long-term performance.

“Earning a place on TIME’s America’s Top GreenTech Companies list for the third consecutive year reflects the growing role of nickel-zinc technology in delivering safe, sustainable power,” said Tod Higinbotham, CEO of ZincFive. He emphasized the company’s “power of good chemistry” approach to balancing performance, safety, and eco-friendliness.

The company’s inclusion builds on a series of recent awards recognizing its innovation in energy storage, particularly in applications where reliability is critical. As demand for resilient and efficient power continues to grow, ZincFive’s technology is increasingly positioned to support the next generation of digital infrastructure.

For full details, read the press release here.

The post ZincFive Earns TIME GreenTech Recognition for Third Straight Year appeared first on Data Center POST.

Company Profile: VIRTUS on Redefining Data Centre Growth in Europe

9 February 2026 at 17:30

Data Center POST had the opportunity to connect with Christina Mertens, who joined VIRTUS as VP Business Development EMEA in June 2022. She brings more than ten years of experience developing strategies for, and expanding, existing and new hyperscale infrastructure geographies across EMEA.

Before joining VIRTUS, she spent a decade at Amazon in EMEA, where she expanded existing AWS data centre regions across colocation and self-built facilities and, as country manager, launched new region geographies. In her final role at Amazon Web Services in EMEA, as Data Center Divestiture Principal, Christina worked alongside large strategic hyperscale cloud customers, advising them on their infrastructure assets and developing new models to facilitate and enhance their cloud migration journeys. At VIRTUS, she is Managing Director for Germany and Italy, responsible for overseeing all aspects of the business, including expansions, sales, data centre design, construction and operations.

The information below is summarized to give our readers a deeper dive into who VIRTUS is, what the company does and the problems it is solving in the industry.

What does VIRTUS do?  

VIRTUS is a European data centre provider and the largest in the UK. With more than ten years of experience, it tailors solutions to specific customer requirements, whichever sector a business operates in.

What problems does VIRTUS solve in the market?

Businesses have unique workloads, project durations and changing requirements. VIRTUS’ solutions are designed to provide the digital infrastructure that supports these needs. Built at vast scale, all of our data centres are designed modularly, allowing full flexibility for customers’ requirements. Our facilities operate on 100% renewable energy and are amongst the most efficient in the world.

What are VIRTUS’ core products or services?

We build AI-ready, build-to-suit and colocation data centres.

VIRTUS’ AI-Ready Data Centres are designed to support the high-performance computing (HPC) demands of artificial intelligence workloads. Our facilities provide the optimum environment for HPC deployments of any size, including the next generation of AI IT infrastructure and machine learning (ML) workloads, which require next-generation cooling and increased power per rack.

Our build-to-suit data centres are designed specifically for the customer. We know that organisations of all sizes need real flexibility, which is why we work with our customers to create bespoke solutions. For example, some need cutting-edge AI deployments with space to scale at speed, while others might have a hyperscale cloud deployment that needs custom-built data halls.

Our colocation service is designed to provide maximum flexibility around individual IT power and space requirements. The modular facilities are designed to scale up with customer growth. This, combined with truly flexible commercial terms, allows customers to grow in a cost-efficient and unrestrictive environment.

What markets do you serve?

VIRTUS’ European data centres are strategically located in key markets: currently London (UK), Berlin (Germany) and Milan (Italy). As part of ST Telemedia Global Data Centres’ (STT GDC) global platform, we have a presence in ten geographies, with more than 101 data centres and over 2 GW of IT load across 20+ major business markets.

Our vast experience comes from working across many industry sectors – from financial institutions that require ultra-low latency, to thriving tech start-ups that rely on contiguous space to grow, to the world’s largest hyperscalers that take entire buildings or campuses.

What challenges does the global digital infrastructure industry face today?

Many current European data centres simply cannot meet the short- and long-term demands for critical digital infrastructure, often because of a shortage of facilities that can support high-density HPC workloads. Finding land with access to renewable power on which to build new facilities, quickly and at scale, is a fundamental challenge.

For years, development revolved around a handful of key metropolitan hubs. Frankfurt, London, Amsterdam and Paris (collectively known as the FLAP locations) carried much of the continent’s cloud, enterprise and interconnection load, due to their proximity to financial services, global carriers and concentrated digital ecosystems.

While those hubs continue to grow, their conditions have undoubtedly changed. Power connections are being delayed because parts of the electricity distribution network cannot carry the required load, suitable land parcels are becoming scarcer and therefore more expensive to secure, and planning regulations are tightening, lengthening approval timelines – if approvals are granted at all.

Meanwhile, demand for computing power is surging in ways that surpass forecasts made even two years ago. AI training and inference, HPC, analytics and modernised public services all require significant and sustained energy and cooling capacity.

McKinsey suggests that global demand for data centre capacity could more than triple by 2030. Europe clearly needs more digital infrastructure, but it needs that infrastructure in places with the headroom and regulatory clarity to support long-term expansion. This is partly why what are sometimes known as second-tier locations are becoming increasingly critical to expanding Europe’s digital architecture.

This is not a marginal shift. Analysts expect Europe’s installed data centre capacity to more than double over the next five years, from roughly 24 GW in 2025 to around 55 GW by 2030, with secondary markets growing fastest. And while recent CBRE analysis indicates that in 2025 around 57% of new capacity will still be delivered in the core FLAP-D markets, the remaining 43% will come from secondary locations such as Milan, Madrid and Berlin, many of which are now on track to exceed 100 MW of installed capacity in their own right. This is the context in which second-tier locations are moving from “nice to have” to essential if Europe is to keep pace with global demand.
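The headline forecast can be sanity-checked with quick arithmetic. The sketch below is illustrative only, using just the figures quoted in this answer (roughly 24 GW installed in 2025, around 55 GW by 2030):

```python
# Back-of-envelope check of the European capacity forecast quoted above:
# roughly 24 GW installed in 2025 growing to around 55 GW by 2030.
base_gw, target_gw, years = 24.0, 55.0, 5

# Implied compound annual growth rate (CAGR) of installed capacity
cagr = (target_gw / base_gw) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 18% per year
```

Growth of that order, sustained for five years, is what moves secondary markets from optional to essential.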

How is VIRTUS adapting to these challenges?

Our strategy is to build new facilities at scale, located close to – but not necessarily in – major European metropolitan cities, and supplied with renewable energy.

We are currently building a €3bn, 300 MW data centre campus at Wustermark, west of Berlin. Wustermark offers what many central locations cannot: land large enough for a multi-building campus, access to sustainable electricity, proximity to rail and motorway networks, and alignment with Germany’s policy focus on digital capacity. The site is also positioned to benefit from Germany’s wider energy and grid modernisation programmes, including access to renewable energy: the campus is adjacent to some of Germany’s largest onshore wind farms, which are capable, via a substation and direct coupling, of fulfilling the facility’s energy requirements.

This move towards larger campuses is a calculated strategy that acknowledges the non-linear cost relationship inherent in these operations: larger megascale campuses of 200–500 MW can often afford providers – and therefore customers – greater efficiencies.

We are also constructing a facility in Italy. Located in Cornaredo, within the Milan West data centre cluster, the site will provide ample capacity to support hyperscalers, enterprises and service providers as digital infrastructure demands in Europe continue to grow.

What are VIRTUS’ key differentiators?

What sets VIRTUS apart from our competitors can be found in many aspects of the design, build and operation of our facilities. However, the quality of operations – our Operational Excellence – is where we truly excel. The way we have implemented design innovations makes a difference to the service we provide in terms of efficiency and resilience. It’s how we design, build, test, maintain, change and operate our facilities that differentiates us, ensuring robust and reliable availability.

What can we expect to see/hear from VIRTUS in the future?  

It’s an exciting time for VIRTUS in Europe. To meet customer demand we are still increasing our presence as the leader in the UK market, opening two new London data centres in 2026 (LONDON12 and LONDON14) and, in the near future, a large four-data-centre campus at Saunderton, whilst continuing our European expansion.

What upcoming industry events will you be attending? 

The VIRTUS team is attending the following events: Platform UK, where Adam Eaton will speak on a keynote panel; Energy Storage Summit, where Helen Kinsman will speak on a panel; Compute Summit, where Ramzi Charif will speak on a panel; and Datacloud Energy, where Helen Kinsman will join another panel.

Do you have any recent news you would like us to highlight?

Earlier in 2026 we announced VIRTUS’ new CEO, Adam Eaton. Under his leadership, we will continue to expand our portfolio of high-efficiency, sustainable data centres, building on more than a decade of rapid growth across the UK and Europe. VIRTUS remains committed to its vision to deliver world-class, energy-efficient infrastructure that supports the growth of the digital economy.

Where can our readers learn more about VIRTUS?  

You can learn more about us on our website, www.virtusdatacentres.com.

How can our readers contact VIRTUS? 

You can contact us through the form on our website, www.virtusdatacentres.com/contact-us.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed and online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to get the most current information and thought-provoking ideas most apt to add relevance to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com or submit your article here.

Want more digital infrastructure news? Stay in the know and subscribe to Data Center POST today!

The post Company Profile: VIRTUS on Redefining Data Centre Growth in Europe appeared first on Data Center POST.

Company Profile: GreenScale on Building Sustainable, Power-Rich Digital Infrastructure

5 February 2026 at 17:30

Data Center POST had the opportunity to connect with Jean-François Berche, the Chief Technology Officer at GreenScale, who is guiding the company’s technological vision towards infrastructure that is scalable, efficient, and above all, sustainable. He focuses on developing data centres capable of supporting the complex needs of AI-driven workloads, while ensuring GreenScale leads in technology integration within the energy ecosystem.

Jean-François previously held senior roles at Microsoft and AWS, where he was instrumental in expanding cloud infrastructure to meet the growing demands of AI. His extensive work there in site selection, colocation, and cloud region expansion positions him to drive GreenScale’s technological capabilities to the pinnacle of what is possible.

His passion for sustainability in technology is well-aligned with GreenScale’s mission. Outside of work, Jean-François remains committed to exploring how technology can positively impact society through sustainable and innovative practices. The interview information below has been summarized to provide readers with clarity into who GreenScale is, what they do and the problems they are solving in the industry.

What does GreenScale do?  

GreenScale is a data centre platform redefining the future of sustainable digital infrastructure across Europe’s expanding data centre markets.

What problems does GreenScale solve in the market?

As demand for high-performance AI and cloud workloads accelerates, power availability, grid constraints, and environmental impact have become critical bottlenecks. At GreenScale, we are developing a sustainable data centre platform that positively contributes to the grid, local communities, and the wider energy ecosystem. We provide access to long-term power scalability, combined with deep local relationships with grid utilities and local communities, to enable customers to grow compute capacity quickly, efficiently, and responsibly.

What are GreenScale’s core products or services?

Digital infrastructure

What markets do you serve?

We’re developing data centres in Europe, with plans for international expansion.

What challenges does the global digital infrastructure industry face today?

The global digital infrastructure industry faces the challenge of scaling AI and cloud capacity amid constrained power availability, grid limitations, and growing environmental concerns.

How is GreenScale adapting to these challenges?

Sustainability at GreenScale starts with site selection. By focusing on new power-rich regions such as Norway, where hydropower is abundant, and Derry/Londonderry, where strong wind resources support renewable energy generation, we secure clean, scalable energy from the outset. Working closely with local utilities allows us to contribute positively to the grid while accelerating speed to deployment and enabling responsible, long-term growth for digital infrastructure.

What are GreenScale’s key differentiators?

GreenScale’s key differentiators lie in our ability to deliver at speed while maintaining a strong sustainability focus. We prioritise rapid deployment through strategic partnerships, including our recently announced collaboration with Vertiv, and by building in new power-rich markets that support long-term scalability. Our platform is underpinned by a deep commitment to ESG and led by a team with over 100 years of combined industry experience, enabling us to execute reliably in a rapidly evolving market.

What upcoming industry events will you be attending? 

PTC, NVIDIA GTC, DCAC, Data Centre Expo, Data Centre World London, Datacloud Global Congress and many more!

Do you have any recent news you would like us to highlight?

Vertiv and GreenScale Announce Strategic Collaboration to Deploy AI-Ready Data Centre Platforms across Europe.

Where can our readers learn more about GreenScale?  

Readers can learn more on our company website, www.greenscaledc.com.

How can our readers contact GreenScale? 

You can contact us through our website, www.greenscaledc.com/contact.

# # #

The post Company Profile: GreenScale on Building Sustainable, Power-Rich Digital Infrastructure appeared first on Data Center POST.

Company Profile: STT GDC Philippines on Building the Philippines’ Largest AI-Ready Data Center Campus

4 February 2026 at 17:30

Data Center POST had the opportunity to connect with Carlo Malana, President and CEO of STT GDC Philippines, a joint venture among Globe Telecom, Ayala Corporation and ST Telemedia Global Data Centres. The company provides secure, reliable, and sustainable data centers to enable digital transformation for global and local businesses. Carlo brings more than two decades of diverse leadership experience in the Information Communications Technology (ICT) industry, including strategic roles at AT&T and as CIO of Globe. He earned a double degree from the University of California, Berkeley and an MBA from Southern Methodist University.

Across the United States, Mexico, and the Philippines, he has led both technology and business organizations in areas as diverse as strategy, program management, merger integration, retail, finance, customer operations, and sales.

The interview information below has been summarized to provide readers with clarity into who STT GDC Philippines is, what they do and the problems they are solving in the industry.

What does STT GDC Philippines do?  

ST Telemedia Global Data Centres (STT GDC) Philippines empowers business digital transformation through a service model integrating colocation, cross connect, and support services. We provide colocation via scalable, sustainable, and secure infrastructure operated to strict global standards – a commitment recently validated by our flagship 124 MW STT Fairview Data Center Campus achieving the IDCA G2 Design Certification and our STT Cavite 1 data center earning the Uptime Institute Tier III Design Certification. Our Interconnect & Connectivity solutions provide a carrier-neutral platform optimized for seamless access to hybrid and multi-cloud environments, while our Support Services act as your extended technical team, managing critical facility operations so you can focus exclusively on your core business performance.

What problems does STT GDC Philippines solve in the market?

STT GDC Philippines addresses the critical shortage of high-quality digital infrastructure in Southeast Asia (SEA) by replacing outdated systems with massive, scalable facilities built for the future. We solve the capacity shortfall by delivering hyperscale-ready infrastructure, such as our 124 MW STT Fairview campus, designed to meet the rigorous TIA-942 Rated 3 and Uptime Institute Tier III standards for concurrent maintainability. We specifically address the urgent demand for AI and high-performance computing by building AI-ready facilities equipped with high power density and advanced liquid cooling support. Most importantly, we eliminate downtime concerns by providing SLA-backed availability, ensuring your mission-critical business operations remain secure and stable 24/7 in a sustainable environment. Finally, we remove connectivity restrictions through our carrier-neutral ecosystem, providing a resilient platform that offers customers superior network choice and the flexibility to connect with the partners that best serve their requirements.

What are STT GDC Philippines’s core products or services?

Our core services are colocation, cross connect, and support services.

What markets do you serve?

ST Telemedia Global Data Centres (STT GDC) Philippines is a leading carrier-neutral provider dedicated to supporting the high-density requirements of Hyperscalers, AI companies, and large enterprises in the banking, financial services, and telecommunications sectors.

As a joint venture between Globe Telecom, Ayala Corporation, and STT GDC, we enable digital transformation by offering scalable, sustainable, and secure infrastructure designed for mission-critical applications. Our facilities are specifically optimized for high-performance workloads, leveraging strategic partnerships with industry leaders to deploy advanced solutions such as liquid cooling for AI-driven demands.

Our data centers provide a flexible technology foundation with direct access to major global cloud platforms and a diverse ecosystem of connectivity partners. This carrier-neutral approach ensures optimal connectivity for hybrid and multi-cloud environments, while our strict operational excellence and 24/7 on-site technical expertise deliver industry-leading uptime. By integrating these best-in-class partnerships, we allow your organization to depend completely on our infrastructure while you focus on driving your core business growth.

What challenges does the global digital infrastructure industry face today?

The industry is currently facing a massive energy and power crisis, where securing reliable electricity has become significantly harder than finding physical land. Because AI operations consume vast amounts of energy, they place an immense strain on local power grids, making it difficult for operators to find suitable locations while sticking to green energy goals.

Secondly, the rapid adoption of AI has created a thermal management challenge; the extreme heat generated by modern high-performance chips exceeds the limits of traditional air cooling, forcing a pivot toward advanced liquid cooling methods even as universal standards remain undefined.

Finally, geopolitical instability and supply chain disruptions are acting as a major brake on progress. Rising global tensions are complicating where secure networks can be built, while acute shortages of essential equipment, like high-voltage transformers and backup generators, are delaying construction and preventing the infrastructure from keeping pace with global demand.

How is STT GDC Philippines adapting to these challenges?

STT GDC Philippines is adapting by building flexible, high-capacity infrastructure, such as the 124 MW STT Fairview Data Center Campus, that is fully ready for AI and liquid cooling but remains adaptable to changing technology rather than being limited to a single purpose. We are addressing the energy challenge by committing to 100% renewable energy for our operations. To navigate global instability, we maintain a neutral position as a carrier-neutral platform, ensuring resilience and open choice for all networks.

What are STT GDC Philippines’s key differentiators?

Our key differentiators begin with our adherence to global standards, ensuring that every facility in our portfolio operates with the same rigor and reliability found across our international platform. This foundation allows us to provide the most extensive capacity in the region, highlighted by the 124 MW STT Fairview Data Center Campus, the largest, most interconnected, carrier-neutral and sustainable data center in the Philippines. Our commitment to international, sustainability-driven design is evident in our LEED Gold and TIA-942 Rated 3 certifications, as well as our “AI-ready” infrastructure that supports liquid cooling to reduce environmental impact.

Beyond physical assets, we prioritize our talent through the DC Power Up program, a milestone initiative that trains and certifies the next generation of data center professionals to ensure a future-ready workforce. Our operational excellence is the heartbeat of our business, utilizing advanced automation and AI-powered cooling to maintain peak efficiency 24/7. Finally, we leverage deep local expertise through our powerful partnership with Globe and Ayala, combining the country’s leading telecommunications reach and corporate heritage to provide customers with a seamless, trustworthy gateway into the Philippine digital economy.

What can we expect to see/hear from STT GDC Philippines in the future?  

STT GDC Philippines is focused on rapidly scaling its delivery capabilities, a goal already in motion as we begin operating with our first customers at STT Fairview 1. This marks a significant milestone for what will be the largest and most AI-ready data center campus in the Philippines, featuring infrastructure specifically engineered for high-density computing and advanced liquid cooling. Our commitment to innovation is further showcased at our AI Synergy Lab, where we demonstrate the future of thermal management and high-efficiency power solutions. To support this growth, we are accelerating partnerships across the ecosystem, recently onboarding key connectivity partners to ensure our facilities serve as the premier, carrier-neutral gateway for Southeast Asia’s digital future.

What upcoming industry events will you be attending? 

We are excited to represent STT GDC Philippines at two of the most influential technology gatherings in the region and the world this year. This February, our team will be in Jakarta for APRICOT 2026, the Asia Pacific region’s premier internet operations and networking summit. This event is a critical forum for us to collaborate with network engineers and policymakers to strengthen the digital fabric of Southeast Asia. Following this, we will be attending NVIDIA GTC in March in San Jose, California. Often called the “Super Bowl of AI,” GTC is where we engage with the latest breakthroughs in AI infrastructure and high-performance computing, ensuring that our data centers remain at the cutting edge of the global AI revolution.

Do you have any recent news you would like us to highlight?

We are excited to share several major milestones that underscore our rapid growth and commitment to the Philippines’ digital future. Most recently, in October 2025, we announced the onboarding of our first connectivity partners at our flagship STT Fairview Data Center campus. These partnerships are significant for our carrier-neutral ecosystem, providing customers with diverse network choices and the resilience needed for AI-powered growth. Additionally, the 124MW STT Fairview Data Center campus recently achieved the prestigious IDCA G2 Design Certification, recognizing its world-class N+1 design and operational excellence. On the sustainability front, we are proud to have transitioned to 100% renewable energy across all our operational data centers as of early 2025.

Is there anything else you would like our readers to know about STT GDC Philippines and its capabilities?

Finally, we want your readers to know that STT GDC Philippines is actively pioneering the future of high-performance computing through our AI Synergy Lab. Launched in collaboration with industry leaders, the lab allows enterprises to run actual AI workloads in a controlled environment, providing a live showroom for high-density computing solutions that are essential for modern digital transformation. By bridging the gap between theoretical AI potential and real-world deployment, the AI Synergy Lab ensures that our partners can optimize their hardware configurations for maximum performance and efficiency. This initiative reinforces our commitment to making the Philippines a premier hub for AI innovation in Southeast Asia, providing the specialized environment required to support the next generation of intelligent computing.

Where can our readers learn more about STT GDC Philippines?  

Readers can learn more on our company website, www.sttelemediagdc.com/ph-en.

How can our readers contact STT GDC Philippines? 

You can contact us through Facebook, LinkedIn, or our website.

# # #

The post Company Profile: STT GDC Philippines on Building the Philippines’ Largest AI-Ready Data Center Campus appeared first on Data Center POST.

How to avoid drowning in data at the expense of freshwater supplies

28 January 2026 at 15:18

TechBuyer’s Astrid Wynne argues that as AI drives up cooling demand, water stewardship must become a core design principle – not an afterthought.

As artificial intelligence accelerates demand for data centre capacity, the conversation around sustainability is shifting. Energy efficiency has long dominated the agenda, but water, the silent resource underpinning cooling systems, has emerged as a critical concern.

Scoping the problem on site and throughout operations, and providing practical guidance to avoid placing extra strain on freshwater supplies, were key aims of The Data Centre Alliance’s Drowning in Data best practice paper, published in October 2025. Developed by leading industry experts, the paper explains how to avoid freshwater use, how to account for the water footprint of energy use, and how to maximise water efficiency in cooling systems.

Growing awareness of water scarcity

Water scarcity is no longer a distant threat. Today, four billion people experience severe water stress for at least one month each year, according to a 2025 World Economic Forum report. In the UK, the deficit between the infrastructure capacity to provide clean water and the demands placed on it by agriculture, housing and industrial needs is in the billions of litres a day. The growing number of data centres, and reports of their on-site water use, began to raise alarm bells in the mainstream press in early 2025.

Keir Starmer’s announcement of proposed ‘AI Growth Zones’ early in the year prompted articles from the BBC raising concern that the UK’s AI ambitions could lead to water shortages. While it is true that high-density computing drives up cooling requirements, there are also numerous technologies to address this.

Large evaporative cooling towers, which can consume tens of thousands of cubic metres a year, are not popular in the UK. By August, a techUK report had found that half of England’s data centres now use waterless cooling. Other reports also suggested that used water could be deployed to cool data centres.

Industry guidance

Just as with carbon emissions, data centre water consumption is an issue both on site and through the energy supply chain. The authors of the Drowning in Data paper recognised this early on and structured the guidance around water efficiency in the cooling system; the type of water drawn on site and how it can be treated; and the water footprint of the energy supply.

The paper shows that operators, vendors and policymakers are collaborating to tackle water use with the same rigour applied to energy efficiency—and recognises that it is a system with many moving parts.

The fundamentals of water stewardship

The paper outlines six actionable principles for reducing water impact. It also recognises that these are interrelated, and that they have a relationship with energy efficiency. A brief overview is given below:

  1. Evaluate cooling systems
    Not all cooling systems are created equal. For a 5 MW data centre in London, designs using cooling towers can consume around 38,000 m³/year, adiabatic coolers around 800 m³/year, and dry coolers no direct water at all. Selecting the right technology can cut water use by orders of magnitude.
  2. Minimise the water footprint of the energy used
    Beyond direct consumption, electricity generation carries an embedded water cost. No studies have yet defined the proportion for AI workloads, but studies on another intensive compute operation – Bitcoin – suggest that most of this sits in the energy footprint. Maximising energy efficiency, and using energy supplies with lower water footprints, is a key part of good water stewardship.
  3. Design with the surrounding environment in mind
    Cooling systems must take into account the surrounding environment in order to balance savings in direct water use (through reduced cooling demand) with indirect water waste through increased electricity use overall.
  4. Design with non-potable water in mind
    Grey water systems and rainwater harvesting can offset potable water demand, reducing strain on municipal supplies. However, different water qualities require different levels of electricity to make them suitable for cooling systems, and this needs to be considered.
  5. Apply systems thinking
    The surrounding community’s needs also play a part. In water-stressed areas, reducing direct water use will be a priority. In cooler, wetter areas, priority may shift towards the benefits of heat generation from the data centre—captured by direct-to-chip cooling and fed into district heating systems.
  6. Introduce circular economy principles for hardware refresh
    Extending IT equipment life and promoting reuse reduces embodied water in manufacturing – a hidden but significant component of total water impact. According to the Green Electronics Council, the manufacture of a single server requires 1,500–2,000 gallons of water.
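
The scale differences in the first principle can be made concrete with a quick sketch. The 5 MW London figures are from the paper; the per-MW normalisation is an illustrative addition, not a number from the paper:

```python
# Annual direct water use for a 5 MW data centre in London, using the
# figures cited in the Drowning in Data paper. The per-MW normalisation
# is an illustrative addition, not a number from the paper.
SITE_MW = 5

water_m3_per_year = {
    "cooling towers": 38_000,
    "adiabatic coolers": 800,
    "dry coolers": 0,
}

for tech, m3 in water_m3_per_year.items():
    print(f"{tech:18s} {m3:>7,} m3/yr total, {m3 / SITE_MW:>8,.0f} m3/yr per MW")

# Cooling towers draw roughly 47x more water than adiabatic coolers here.
ratio = water_m3_per_year["cooling towers"] / water_m3_per_year["adiabatic coolers"]
```

Dry coolers eliminate direct water use entirely but tend to increase electricity use, which is why the paper pairs this principle with the energy-footprint and surrounding-environment ones.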

Where next for water use in the data centre sector

Continuing press coverage in recent months shows that data centres are under scrutiny for their water use in a way that other sectors are not. A December 2025 article in The Guardian is one such example. With researchers increasingly turning towards the water footprint of AI, mainstream media is becoming more aware of indirect water consumption as a result of energy use.

No similar stories circulate about heavy industry or manufacturing, which are more established and more likely to fly under the radar. Whether or not this is fair is a moot point; water is the next frontier in data centre sustainability. As the industry scales to meet digital demand, water stewardship must become a core design principle, not an afterthought.

The Drowning in Data paper provides insight into how the sector can address this with an approach that balances operational resilience with environmental responsibility. However, it is just the start of a long, complex process of understanding impacts and balancing competing demands. The Data Centre Alliance welcomes suggestions and collaborations that can move the conversation forward. 

You can read the full paper and join the discussion at dcauk.org.

Waste heat from UK data centres could heat 3.5m+ homes

27 January 2026 at 11:00

Waste heat from the UK’s latest crop of data centres could be used to heat at least 3.5 million homes by 2035, according to new research that argues the country risks letting a major low-carbon heat source go unused without investment in heat network infrastructure.

The analysis, produced by heat mapping organisation EnergiRaven in partnership with Danish energy and sustainability consultancy Viegand Maagøe, links projected growth in data centres to a significant rise in recoverable ‘waste’ heat. It estimates that data centres could provide enough heat for between 3.5 million and 6.3 million homes by 2035, depending on factors including the efficiency and design of future facilities.
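
As a rough consistency check, the homes range can be inverted into the implied volume of recoverable heat. The per-home demand used below (about 10 MWh/year) is an assumed UK rule of thumb, not a figure from the report:

```python
# Invert the report's homes-heated range into the implied recoverable
# heat. The per-home figure (~10 MWh/year for space heating and hot
# water) is an assumed UK rule of thumb, not a number from the report.
MWH_PER_HOME_PER_YEAR = 10

homes_low, homes_high = 3_500_000, 6_300_000

implied_twh_low = homes_low * MWH_PER_HOME_PER_YEAR / 1e6    # TWh per year
implied_twh_high = homes_high * MWH_PER_HOME_PER_YEAR / 1e6

print(f"Implied recoverable heat: {implied_twh_low:.0f}-{implied_twh_high:.0f} TWh/yr")
```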

The research lands as the UK grapples with two parallel challenges: the rapid expansion of energy-hungry digital infrastructure to support cloud computing and AI, and the long-running difficulty of decarbonising heat – still dominated by gas boilers across much of the housing stock.

EnergiRaven argues that many existing and planned data centres are located close to proposed new towns and to communities facing higher levels of fuel poverty, raising the prospect of linking local heat demand with a growing heat supply that would otherwise be rejected into the atmosphere.

“Our national grid will be powering these data centres – it’s madness to invest in the additional power these facilities will need, and waste so much of it as unused heat, driving up costs for taxpayers and bill payers,” commented Simon Kerr, Head of Heat Networks at EnergiRaven.

“Microsoft has said it wants its data centres to be ‘good neighbours’. Giving heat back to their communities should be an obvious first step.”

How Manchester could be an ideal pilot

The report points to Greater Manchester as one area where this alignment could be particularly strong. It notes plans for around 15,000 homes at the Victoria North development and a further 14,000-20,000 at Adlington, alongside clusters of fuel poverty.

At the same time, the analysis highlights a concentration of data centre infrastructure around the city region, including more than a dozen existing sites and four additional facilities planned. EnergiRaven argues that, in theory, this proximity could make it easier to connect heat sources and new developments – provided heat networks are planned early enough, and built at sufficient scale.

More broadly, the research suggests the same pattern appears across the UK: growth in data centres is expected to increase the amount of recoverable heat, but the ability to use it will depend on whether networks exist to move that heat into nearby homes and buildings.

How heat networks work

Capturing waste heat typically requires a heat network: insulated pipework that transports hot water from a heat source to buildings, where heat interface units (HIUs) can replace individual gas boilers. The report notes that waste heat recovery is widely used across parts of northern Europe, particularly in Nordic countries, where major sources of waste heat — including data centres, power stations and other industrial processes — are more routinely integrated into district heating systems.

In the UK, heat networks remain a comparatively small part of the heating mix, but policy has been moving to encourage growth. Some cities have already been designated as ‘Heat Network Zones’, where heat networks are assessed as the cheapest low-carbon option for decarbonising heat locally.

Regulatory changes are also on the horizon. Ofgem is due to take over regulation of heat networks in 2026, and new technical standards will be introduced through the Heat Network Technical Assurance Scheme (HNTAS), intended to improve consumer protections and investor confidence.

The Government’s recent Warm Homes Plan also includes a target to double the share of heat demand met by heat networks in England to 7% (27 TWh) by 2035, with a longer-term expectation that heat networks could supply around a fifth of all heat by 2050. It also pledges £195 million per year through the Green Heat Network Fund to support heat network development.
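
The Warm Homes Plan targets imply a rough total that puts the 2050 ambition in context. The flat-demand assumption below is a simplification for arithmetic's sake, not a government projection:

```python
# Back out the total heat demand implied by the Warm Homes Plan target
# (7% of England's heat demand = 27 TWh), then see what "around a fifth"
# by 2050 would mean in TWh if demand stayed roughly flat. The flat-demand
# assumption is a simplification, not a government projection.
target_share, target_twh = 0.07, 27

total_heat_twh = target_twh / target_share      # ~386 TWh implied total
fifth_by_2050_twh = 0.20 * total_heat_twh       # ~77 TWh

print(f"Implied total heat demand: {total_heat_twh:.0f} TWh")
print(f"A fifth of that: {fifth_by_2050_twh:.0f} TWh")
```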

However, EnergiRaven argues that current policy settings still fall short of what would be needed to take full advantage of large-scale waste heat from data centres.

“Current policy in the UK is nudging us towards a patchwork of small networks that might connect heat from a single source to a single housing development. If we continue down this road, we will end up with cherry-picking and small, private monopolies – rather than national infrastructure that can take advantage of the full scale of waste heat sources around the country,” Kerr added.

“We know that investment in heat networks and thermal infrastructure consistently drives bills down over time and delivers reliable carbon savings, but these projects require long-term finance. Government-backed low-interest loans, pension fund investment, and institutions such as GB Energy all have a role to play in bridging this gap, as does proactivity from local governments, who can take vital first steps by joining forces to map out potential networks and start laying the groundwork with feasibility studies.”

A “heat highways” argument — and what it would change

A central recommendation in the analysis from EnergiRaven is the need for larger, strategic networks – which it describes as ‘Heat Highways’ – capable of transporting waste heat over longer distances and linking multiple sources and demand centres. The report suggests that smaller, isolated schemes may struggle to exploit the growing scale of data centre waste heat, particularly as facilities cluster in certain regions rather than being evenly spread across the UK.

Viegand Maagøe’s Peter Maagøe Petersen argues that building larger thermal networks could also provide benefits beyond household heating, including grid balancing and energy security.

“We should see waste heat as a national opportunity. In addition to heating homes, heat highways can also reduce strain on the electricity grid and act as a large thermal battery, allowing renewables to keep operating even when usage is low, and reducing reliance on imported fossil fuels. As this data shows, the UK has all the pieces it needs to start taking advantage of waste heat – it just needs to join them together,” he noted.

“With denser cities than its Nordic neighbours, and a wealth of waste heat on the horizon, the UK is a fantastic place for heat networks. It needs to start focusing on heat as much as it does electricity – not just for lower bills, but for future jobs and energy security.”

The underlying message from both organisations is blunt: data centre growth is already being planned and powered. The question is whether the UK will treat the heat those facilities inevitably produce as a resource – or continue to design energy infrastructure that ignores it.

IEW 2026 Concludes with Strong Affirmation of India’s Energy Leadership and Innovation Excellence

31 January 2026 at 04:34

India Energy Week (IEW) 2026 concluded in Goa with a strong affirmation of India’s preparedness, resilience, and expanding leadership role in the global energy landscape amid ongoing geopolitical uncertainties. Addressing […]

The post IEW 2026 Concludes with Strong Affirmation of India’s Energy Leadership and Innovation Excellence appeared first on SolarQuarter.

4 Weird Things You Can Turn into a Supercapacitor

22 October 2025 at 16:00


What do water bottles, eggs, hemp, and cement have in common? They can be engineered into strange, but functional, energy-storage devices called supercapacitors.

As their name suggests, supercapacitors are like capacitors, but with far greater capacity. They can store a lot of energy, as batteries do, yet charge and discharge quickly, as capacitors do. They’re usually found where a lot of power is needed quickly and for a limited time, such as providing nearly instantaneous backup electricity for a factory or data center.

Typically, supercapacitors are made up of two activated carbon or graphene electrodes, electrolytes to introduce ions to the system, and a porous sheet of polymer or glass fiber to physically separate the electrodes. When a supercapacitor is fully charged, all of the positive ions gather on one side of the separating sheet, while all of the negative ions are on the other. When it’s discharged, the ions are randomly distributed, and it can switch between these states much faster than batteries can.
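
That capacity gap is easy to quantify with the standard capacitor energy formula, E = ½CV². The values below are typical orders of magnitude for an electrolytic capacitor and a commercial supercapacitor cell, chosen for illustration rather than taken from any device in this article:

```python
# Energy stored in any capacitor: E = 0.5 * C * V^2.
# Illustrative values: a 1 mF electrolytic capacitor at 25 V versus a
# 3000 F supercapacitor cell at 2.7 V. These are typical orders of
# magnitude, not specifications from this article.
def stored_energy_joules(capacitance_farads: float, voltage_volts: float) -> float:
    return 0.5 * capacitance_farads * voltage_volts ** 2

electrolytic_j = stored_energy_joules(0.001, 25)  # ~0.31 J
supercap_j = stored_energy_joules(3000, 2.7)      # ~10,935 J

# Despite its much lower voltage, the supercapacitor stores tens of
# thousands of times more energy than the conventional capacitor.
```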

Some scientists believe that supercapacitors could become more super. They think there’s potential to make these devices more sustainably, at lower cost, and with even better performance if they’re built from better materials.

And maybe they’re right. Last month, a group from Michigan Technological University reported making supercapacitors from plastic water bottles that had a higher capacitance than commercial ones.

Does this finding mean recycled plastic supercapacitors will soon be everywhere? The history of similar supercapacitor sustainability experiments suggests not.

About 15 years ago, it seemed like supercapacitors were going to be in high demand. Then, because of huge investments in lithium-ion technology, batteries became tough competition, explains Yury Gogotsi, who studies materials for energy-storage devices at Drexel University, in Philadelphia. “They became so much cheaper and so much faster in delivering energy that for supercapacitors, the range of application became more limited,” he says. “Basically, the trend went from making them cheaper and available to making them perform where lithium-ion batteries cannot.”

Still, some researchers remain hopeful that environmentally friendly devices have a place in the market. Yun Hang Hu, a materials scientist on the Michigan Technological University team, sees “a promising path to commercialization [for the water-bottle-derived supercapacitor] once collection and processing challenges are addressed,” he says.

Here’s how scientists make supercapacitors with strange, unexpected materials:

Water Bottles

It turns out your old Poland Spring bottle could one day store energy instead of water. Last month in the journal Energy & Fuels, the Michigan Technological University team published a new method for converting polyethylene terephthalate (PET), the material that makes up single-use plastic water bottles, into both electrodes and separators.

As odd as it may seem, this process is “a practical blueprint for circular energy storage that can ride the existing PET supply chain,” says Hu.

To make the electrodes, the researchers first shredded bottles into 2-millimeter grains and then added powdered calcium hydroxide. They heated the mixture to 700 °C in a vacuum for 3 hours and were left with an electrically conductive carbon powder. After removing residual calcium and activating the carbon (increasing its surface area), they could shape the powder into a thin layer and use it as an electrode.

The process to produce the separators was much less intensive—the team cut bottles into squares about the size of a U.S. quarter or a 1-euro coin and used hot needles to poke holes in them. They optimized the pattern of the holes for the passage of current using specialized software. PET is a good material for a separator because of its “excellent mechanical strength, high thermal stability, and excellent insulation,” Hu says.

Filled with an electrolyte solution, the resulting supercapacitor not only demonstrated potential for eco- and finance-friendly material usage, but also slightly outperformed traditional materials on one metric. The PET device had a capacitance of 197.2 farads per gram, while an analogous device with a glass-fiber separator had a capacitance of 190.3 farads per gram.
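
From the figures quoted above, the improvement works out to a few percent:

```python
# Relative capacitance gain of the PET-separator device over the
# glass-fiber baseline, using the two figures reported above.
pet_f_per_g = 197.2
glass_fiber_f_per_g = 190.3

gain_pct = (pet_f_per_g - glass_fiber_f_per_g) / glass_fiber_f_per_g * 100
print(f"PET separator improvement: {gain_pct:.1f}%")  # prints 3.6%
```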

Eggs

Wait, don’t make your breakfast sandwich just yet! You could engineer a supercapacitor from one of your ingredients instead. In 2019, a University of Virginia team showed that electrodes, electrolytes, and separators could all be made from parts of a single object—an egg.

First, the group purchased grocery store chicken eggs and sorted their parts into eggshells, eggshell membranes, and the whites and yolks.

They ground the shells into a powder and mixed them with the egg whites and yolks. The slurry was freeze-dried and brought up to 950 °C for an hour to decompose. After a cleaning process to remove calcium, the team performed heat and potassium treatments to activate the remaining carbon. They then smoothed the egg-derived activated carbon into a film to be used as electrodes. Finally, by mixing egg whites and yolks with potassium hydroxide and letting it dry for several hours, they formed a kind of gel electrolyte.

To make separators, the group simply cleaned the eggshell membranes. Because the membranes naturally have interlaced micrometer-size fibers, their inherent structures allow for ions to move across them just as manufactured separators would.

Interestingly, the resulting fully egg-based supercapacitor was flexible, with its capacitance staying steady even when the device was twisted or bent. After 5,000 cycles, the supercapacitor retained 80 percent of its original capacitance—low compared to commercial supercapacitors, but fairly on par for others made from natural materials.

Hemp

Some people may know cannabis for its medicinal uses, but it has potential in energy storage, too. In 2024, a group from Ondokuz Mayıs University in Türkiye used pomegranate hemp plants to produce activated carbon for an electrode.

They started by drying stems of the hemp plants in a 110 °C oven for a day and then ground the stems into a powder. Next, they added sulfuric acid and heat to create a biochar, and, finally, activated the char by saturating it with potassium hydroxide and heating it again.

After 2,000 cycles, the supercapacitor with hemp-derived electrodes still retained 98 percent of its original capacitance, which is, astoundingly, in range of those made from nonbiological materials. The carbon itself had an energy density of 65 watt-hours per kilogram, also in line with commercial supercapacitors.
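
Retention figures quoted over different cycle counts, like the hemp device's 98 percent after 2,000 cycles and the egg device's 80 percent after 5,000, can be compared by converting them to an equivalent per-cycle retention rate. The geometric-fade model below is a simplifying assumption, not how the papers report their data:

```python
# Convert cycle-retention figures into an equivalent per-cycle retention
# rate, assuming (as a simplification) that capacity fade is geometric:
# retention_after_n = r ** n, so r = retention_after_n ** (1 / n).
def per_cycle_retention(total_retention: float, cycles: int) -> float:
    return total_retention ** (1 / cycles)

egg_r = per_cycle_retention(0.80, 5000)   # egg-based device, from above
hemp_r = per_cycle_retention(0.98, 2000)  # hemp-derived electrodes

# Under this model, the hemp device loses far less per cycle than the
# egg device, even though it was tested over fewer cycles.
```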

Cement

It may have a hold over the construction industry, but is cement coming for the energy sector, too? In 2023, a group from MIT shared how they designed electrodes from water, nearly pure carbon, and cement. Using these materials, they say, creates a “synergy” between the hydrophilic cement and hydrophobic carbon that aids the electrodes’ ability to hold layers of ions when the supercapacitor is charged.

To test the hypothesis, the team built eight electrodes using slightly different proportions of the three ingredients, different types of carbon, and different electrode thicknesses. The electrodes were saturated with potassium chloride—an electrolyte—and capacitance measurements began.

Impressively, the cement supercapacitors were able to maintain capacitance with little loss even after 10,000 cycles. The researchers also calculated that one of their supercapacitors could store around 10 kilowatt-hours—enough to serve about one third of an average American’s daily energy use—though the number is only theoretical.
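
The "about one third" framing is easy to check. The ~30 kWh/day figure below for an average U.S. household's electricity use is an assumed ballpark, not a number given by the researchers:

```python
# Check the "about one third" claim. The ~30 kWh/day figure for an
# average U.S. household's electricity use is an assumed ballpark,
# not a number given in the article.
STORAGE_KWH = 10
DAILY_USE_KWH = 30

fraction = STORAGE_KWH / DAILY_USE_KWH
print(f"Stored energy covers {fraction:.0%} of a day's use")  # 33%
```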

Wintershall Dea joins UK Poseidon CCS project

14 November 2023 at 10:53
  • Acquires 10% stake from Carbon Catalyst
  • Poseidon to store 40 mil mt/year from 2029
  • Estimated total storage capacity 1 billion mt

German oil and gas producer Wintershall Dea has joined the Poseidon car

ESI Expands HVO Fuel Services to Power Data Center Sustainability and Net‑Zero 2030 Ambitions

15 January 2026 at 14:00

ESI Total Fuel Management is expanding its Hydrotreated Vegetable Oil (HVO/R99) services to help data centers and other mission-critical facilities advance their sustainability strategies without sacrificing reliability. With this move, the company is deepening its role as a long-term partner for operators pursuing Net-Zero 2030 goals in an increasingly demanding digital infrastructure landscape.

Advancing data center sustainability

Across the data center industry, operators are under growing pressure to reduce the environmental impact of standby power systems while maintaining assured uptime. ESI draws on decades of experience in fuel lifecycle management, having previously championed ultra-low sulfur diesel adoption, to guide customers through the transition to renewable diesel.

To support practical and scalable adoption, ESI has established the first secure HVO/R99 supply chain on the East Coast, giving operators dependable access to renewable diesel as part of a long-term fuel strategy. This infrastructure enables data center and mission-critical operators to integrate HVO into their operations as a realistic step toward emissions reduction and operational continuity.

Renewable diesel performance benefits

HVO/R99 can reduce carbon emissions by up to 90 percent compared with conventional diesel, while maintaining strong cold-weather performance and long-term fuel stability suited to standby generator storage cycles. As a drop-in fuel, it requires no modifications to existing infrastructure and directly supports Scope 1 emissions reduction initiatives.

Integrated lifecycle approach

Within ESI’s broader portfolio, HVO is one component of a comprehensive approach encompassing fuel quality, monitoring, compliance, and system resiliency.

“Sustainability goals do not replace the need for resiliency, and they can be complementary,” said Alex Marcus, CEO and president of ESI Total Fuel Management. “Our focus is helping customers implement solutions that are technically sound and operationally proven. By managing the entire fuel lifecycle, from supply and storage to monitoring, consumption, and pollution control, we help customers reduce environmental impact while maintaining resilient, mission-critical systems.”

Supporting Net-Zero 2030 objectives

For data center operators pursuing Net-Zero 2030, ESI provides the engineering expertise, infrastructure, and operational support needed to move beyond isolated initiatives toward coordinated, data-driven fuel strategies. This combination of renewable fuel options and full lifecycle management helps strengthen both sustainability and resiliency for mission-critical environments.

Read the full release here.

The post ESI Expands HVO Fuel Services to Power Data Center Sustainability and Net‑Zero 2030 Ambitions appeared first on Data Center POST.

The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling

30 December 2025 at 15:00

As data centers evolve to support AI, edge computing, and high-density cloud architectures, the challenge is no longer just maintaining an optimal power usage effectiveness (PUE); it is achieving thermal reliability under unprecedented compute loads. Direct-to-chip liquid cooling continues to push the envelope on heat transfer performance, but one underestimated element in overall system reliability is the material composition of the fluid conveyance network itself. The hoses and fittings that transport coolant through these systems operate in extreme thermal and chemical environments, and their design directly influences uptime, maintenance intervals, and total cost of ownership.

Why Material Selection Matters

At its core, a liquid cooling system is only as reliable as its weakest component. If hoses, fittings, or seals fail due to poor material compatibility, the result could be leaks, contamination, or shortened system life, leading to downtime and costly remediation. Rubber and rubber-like materials are critical in hose assemblies, as they must balance flexibility for installation and serviceability with long-term resistance to temperature, pressure, permeation, and the coolant and its additives.

The challenge lies in the fact that not all rubbers, or rubber-like materials, are created equal. Each formulation is a complex recipe of polymers, plasticizers, fillers, and curing agents designed to deliver specific performance characteristics. The wrong selection can lead to issues such as fluid permeation, premature aging, or contamination of the coolant. In mission-critical environments like data centers, where even minor disruptions are unacceptable, this risk is magnified.

Temperature and Chemical Compatibility

Although data center cooling systems typically operate at temperatures between 45°C (113°F) and 65°C (149°F), sometimes reaching 100°C (212°F), those ranges can stress certain materials. Nitrile rubber, for example, performs well in oil-based environments but ages quickly in water-glycol systems, especially at higher temperatures. This can cause hardening, cracking, or coolant contamination.

By contrast, ethylene propylene diene monomer (EPDM) rubber has excellent resistance to water, glycols, and the additives commonly used in data center coolants, such as corrosion inhibitors and biocides. EPDM maintains its properties across the required operating range, making it a proven choice for direct-to-chip applications.

However, not all EPDM is the same. Developing the right EPDM for the application demands a deep understanding of polymer chemistry, filler interactions, and process control to precisely balance flexibility, heat resistance, and long-term stability.

Additionally, two curing processes, sulfur-cured and peroxide-cured, produce different performance outcomes. Sulfur-cured EPDM, while widely used, introduces zinc ions during the curing process. When exposed to deionized water, these ions can leach into the coolant, causing contamination and potentially degrading system performance. Peroxide-cured EPDM avoids this issue, offering higher temperature resistance, lower permeation rates, and greater chemical stability, making it the superior choice for modern liquid cooling.

Even among peroxide-cured EPDM compounds, long-term performance is not uniform. While the cure system defines the crosslink chemistry, other formulation choices, particularly filler selection and dispersion, can influence how the material performs over time.

The use of fillers and additives is common in rubber compounding. These ingredients are often selected to control cost, improve processability, or achieve certain performance characteristics such as flame resistance, strength, or flexibility.

The challenge is that some filler systems perform well during initial qualification but are not optimized for the long-term exposures faced in the operating environment. Certain fillers or processing aids can slowly migrate over time, introducing extractables into the coolant or subtly altering elastomer properties. For data center applications, EPDM compounds must therefore be engineered with a focus on long-term stability, reinforcing why EPDM should not be treated as a commodity material in critical cooling systems.

Risks of Non-Compatible Materials

Material incompatibility can have several cascading effects:

  • Contamination – Non-compatible materials can leach extractables into the coolant, leading to discoloration, chemical imbalance, and reduced thermal efficiency.
  • Permeation – Some rubbers allow fluid to slowly migrate through the hose walls, causing coolant loss or altering the fluid mixture over time.
  • Premature Failure – Elevated temperatures can accelerate aging, leading to cracking, swelling, or loss of mechanical strength.
  • Leakage – Rubber under compression may deform over time, jeopardizing seal integrity if not properly formulated for resistance to compression set and tear.

In a recent two-week aging test at 80°C using a water-glycol coolant, hoses made of nitrile and sulfur-cured EPDM showed visible discoloration of the coolant, indicating leaching and breakdown of the material. Peroxide-cured EPDM, on the other hand, maintained stability, demonstrating its compatibility and reliability in long-term data center applications.

The Gates Approach

Drawing on lessons from mission critical industries that have managed thermal challenges for decades, Gates engineers apply material science rigor to the design of liquid cooling hoses for data center applications.

Rather than relying solely on initial material ratings or short-term qualification criteria, Gates begins by tailoring compound design to the operating environment. This includes deliberate control of polymer selection, filler systems, and cure chemistry to manage long-term aging behavior, extractables, permeation, and retention of mechanical properties over time in high-purity coolant systems.

Compounds are validated through extended aging and immersion testing that reflects real operating conditions, including exposure to heat, deionized water, and water-glycol coolants. This allows potential material changes to be identified and addressed during development, before installation in the field.

This material-science-driven process is applied across Gates liquid cooling platforms, including the Data Master, Data Master MegaFlex, and newly released Data Master Eco product lines. By engineering for long-term stability rather than only initial compliance, Gates designs hose solutions intended to support reliable operation, predictable maintenance intervals, and extended service life in direct-to-chip liquid cooled data center environments.

Looking Ahead

As data centers continue to scale, thermal management solutions must adapt in parallel. Advanced architectures, higher rack densities, and growing environmental regulations all point to a future where liquid cooling is standard. In this environment, material selection is no longer a secondary consideration; it is foundational to system reliability.

Operators who prioritize material compatibility in fluid conveyance lines will benefit from longer service intervals, improved coolant stability, and reduced risk of downtime. In other words, the proper rubber formulation doesn’t just move fluid, it moves the industry forward.

At Gates, sustainable, high-performance cooling begins with the details. By focusing on the science of materials, we help ensure that data center operators can confidently deploy liquid cooling systems designed for the challenges of today and the innovations of tomorrow.

# # #

About the Author

Chad Chapman is a Mechanical Engineer with over 20 years of experience in the fluid power industry. He currently serves as a Product Application Engineering Manager at Gates, where he leads a team that provides technical guidance, recommendations, and innovative solutions to customers utilizing Gates products and services.

Driven by a passion for problem-solving, Chad thrives on collaborating with customers to understand their unique challenges and deliver solutions that optimize performance. He is energized by learning about new applications and technologies, especially where insights can be shared across industries. At Gates, he has been exploring the emerging field of direct-to-chip liquid cooling, an exciting extension of his deep expertise in thermal management. The rapid advancements in IT technology and AI have made his journey an inspiring and rewarding learning experience.

The post The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling appeared first on Data Center POST.

Inside the 2025 INCOMPAS Show and the Convergence of Policy Infrastructure and AI

29 December 2025 at 15:00

The 2025 INCOMPAS Show, held November 2–4 at the JW Marriott and Tampa Marriott Water Street in Tampa, Florida, brought together more than 3,000 leaders across communications, broadband, fiber, and technology sectors to explore the evolving landscape of connectivity and competition. One of the most influential gatherings in the U.S. communications ecosystem, the event provided a platform for senior executives, policymakers, and innovators to align on strategies shaping the future of broadband deployment, infrastructure investment, and digital transformation.

This year’s theme of collaboration and convergence set the tone for a comprehensive agenda that highlighted how technology, policy, and innovation are coming together to expand connectivity and bridge the digital divide. Across three days of panels, workshops, and executive-level discussions, speakers addressed the accelerating impact of AI, automation, and public-private partnerships on both network operations and competitive strategy.

The Convergence Era: Policy, Infrastructure, and AI

The opening remarks emphasized the urgency of convergence in today’s communications landscape. Chip Pickering, CEO of INCOMPAS, framed the event with a focus on consolidation, critical infrastructure, and the growing interdependence of networks, power, and policy.

That theme carried into high-profile sessions featuring executives from Verizon, Lumen Technologies, and Bluebird Fiber, where speakers examined how fiber density, cloud connectivity, and edge infrastructure are reshaping both network design and M&A strategy. Panels such as Future-Proofing the Network and Strategic Convergence: How Wireline-Wireless Integration Is Impacting M&A highlighted how capacity planning and integration are now central drivers of transaction value.

AI-driven transformation emerged as a defining force throughout the agenda. In the session Powering Intelligence: The Convergence of Energy, Networks, and AI Infrastructure, leaders including Jeff Uphues, CEO of DC BLOX, and Dan Davis, CEO and Co-Founder of Arcadian Infracom, explored the mounting energy demands of AI workloads and the need for resilient, scalable infrastructure. Discussions emphasized that AI is no longer an overlay, but a foundational consideration in network architecture, power strategy, and long-term investment planning.

Cybersecurity also took center stage, with experts from Granite Telecommunications, UNITEL, Axcent Networks, and Verizon Partner Solutions outlining how AI is being deployed to detect threats, automate responses, and protect increasingly complex telecom environments.

Policy at the Center of Broadband Expansion

Policy reform remained a cornerstone of the INCOMPAS agenda. Sessions focused on the future of the Universal Service Fund, broadband permitting reform, and federal regulatory alignment drew strong engagement from both providers and policymakers. Led by INCOMPAS policy leadership and legal experts from firms including Morgan Lewis, Cooley, Nelson Mullins Riley & Scarborough LLP, and JSI, these discussions reinforced the critical role of permitting, spectrum access, and funding mechanisms such as BEAD in accelerating equitable broadband deployment nationwide.

Modern Marketing and the Human Element

Beyond infrastructure and policy, the Marketing Workshop Series delivered some of the show’s most actionable insights. The opening session, Marketing’s New Blueprint: Balancing AI, Automation, and Authenticity, featured Laura Johns, Founder and CEO of The Business Growers, and Joy Milkowski, Partner at Access Marketing Company. Together, they explored how communications and technology companies can leverage automation and AI tools without losing the authenticity and strategic clarity required to build trust and drive revenue.

The discussion reinforced that AI should function as a strategic enabler rather than a replacement for human insight. Follow-on workshops expanded on this theme, with sessions focused on revenue-driven AI strategy, practical prompt frameworks, and marketing automation systems designed to align sales and marketing teams while supporting scalable growth.

Networking, Partnerships, and Industry Momentum

As always, the INCOMPAS Show excelled as a venue for relationship-building and deal-making. The Buyers Forum and Deal Center facilitated high-value, pre-scheduled meetings, while exhibit hall programming and networking events fostered collaboration across fiber providers, technology vendors, and service partners.

Workforce development, sustainability, and inclusion also emerged as shared priorities. Speakers stressed the need to build talent pipelines capable of supporting AI-driven networks while ensuring that digital transformation delivers measurable benefits across communities.

The Road Ahead

The 2025 INCOMPAS Show made one thing clear: the future of communications will be defined by integration, collaboration, and adaptability. From AI-powered networks and evolving policy frameworks to authentic marketing and workforce readiness, the conversations in Tampa reflected an industry actively shaping its next chapter.

As the ecosystem looks toward 2026, the momentum from INCOMPAS reinforces a collective commitment to closing connectivity gaps, modernizing infrastructure, and aligning innovation with opportunity.

To learn more about INCOMPAS and upcoming events, visit www.incompas.org and www.show.incompas.org.

The post Inside the 2025 INCOMPAS Show and the Convergence of Policy Infrastructure and AI appeared first on Data Center POST.

Reflecting on a Year of Global Growth at Datalec Precision Installations

19 December 2025 at 13:30

As 2025 comes to a close, Tim Hickinbottom, Head of Strategic Accounts at Datalec Precision Installations (DPI), is reflecting on a milestone year both personally and professionally. With nearly four decades in the digital infrastructure and technology sector, Hickinbottom’s perspective offers insight into how experience, adaptability, and long-term vision continue to shape growth in an evolving industry.

A Career Built on Experience and Adaptability

Hickinbottom’s career began in 1986 at Compucorp and includes formative years in the Royal Navy and with British Aerospace in Saudi Arabia. These early experiences helped shape a leadership approach grounded in resilience, discipline, and adaptability, qualities that remain critical as data center and mission-critical services grow more complex and globally connected.

A Defining Year 

In 2025, DPI sustained its year-on-year growth while expanding into new regions. The launch of operations in APAC, continued momentum in the Middle East, and steady growth across Europe marked one of the company’s busiest periods to date. By year-end, DPI expects to operate 23 entities worldwide, with further expansion already underway.

According to Hickinbottom, this progress reflects both strong market demand and a deliberate strategy focused on operational discipline and long-term stability.

Strategy, Engagement, and Sustainability

Behind the visible growth is a leadership team focused on reinvestment and sustainable expansion. While much of this work occurs behind the scenes, evolving strategies and internal alignment are shaping DPI’s direction.

Throughout the year, DPI reinforced its global presence at major industry events including Datacentre World and GITEX conferences across multiple regions. At the same time, the company advanced its sustainability efforts, earning recognition from CDP and EcoVadis and preparing to share its Science Based Targets.

“These initiatives matter deeply to our clients and partners,” Hickinbottom notes, emphasizing accountability and environmental stewardship as core elements of industry leadership.

Looking Ahead to 2026

As DPI looks toward 2026, Hickinbottom remains optimistic about the challenges and opportunities ahead. With hard work embedded in the company’s culture and a clear focus on innovation, DPI is positioned to continue supporting data center operators and digital infrastructure stakeholders worldwide.

“Work should be enjoyable,” Hickinbottom reflects. “It’s been an incredible journey so far, and I’m excited for what’s next.”

To explore Hickinbottom’s full reflections on 2025 and his perspective on the year ahead, read the complete blog on Datalec Precision Installations’ website here.

The post Reflecting on a Year of Global Growth at Datalec Precision Installations appeared first on Data Center POST.

Datalec Precision Installations Earns ‘B’ Score from CDP, Reinforcing Commitment to Environmental Transparency

18 December 2025 at 15:30

In an era where sustainability is no longer just a buzzword but a business imperative, the data center industry is under increasing pressure to demonstrate measurable environmental progress. Datalec Precision Installations (DPI), a provider of world-class global data center design, supply, build, and installation services, has taken a significant step in this direction. The company announced this week that it has been recognized for its transparency on environmental issues with a ‘B’ score from CDP Worldwide, the global non-profit that runs the world’s leading environmental disclosure system.

A Benchmark for Transparency

DPI’s ‘B’ rating in the climate change category places it among a select group of organizations demonstrating “Management” level stewardship. This score indicates that Datalec is not just aware of its environmental impact but is taking coordinated action on climate issues.

The achievement is notable given the rigour of the CDP process. In 2025, nearly 20,000 companies were scored, with CDP’s methodology widely considered the gold standard for corporate environmental reporting. By aligning with the Task Force on Climate-related Financial Disclosures (TCFD) framework, CDP scores are a critical metric for the 640 institutional investors – representing over $127 trillion in assets – who use this data to inform their investment and procurement decisions.

Driving an Earth-Positive Economy

For the data center sector, where Scope 3 emissions and supply chain transparency are critical challenges, DPI’s disclosure represents a commitment to the future.

“We are proud to receive a B score from CDP, which is a meaningful recognition of the tireless and consistent work of our entire team towards achieving our ESG goals,” said Tim Hickinbottom, DPI ESG Group Lead. “Transparency and accountability are at the heart of our sustainability strategy, and this result reflects our commitment to driving positive environmental impact. While we celebrate this milestone, we remain focused on continuous improvement and advancing sustainable practices.”

The Importance of Disclosure

Sherry Madera, CEO of CDP, emphasized that these scores are about more than just accolades. They are about future-proofing operations. “A CDP score is a sign of commitment to high-quality data that enables companies to take earth-positive economic decisions,” Madera noted. “Tackling environmental risks head-on will create a more resilient economy and increase companies’ ability to innovate and invest.”

To learn more about Datalec’s services and sustainability initiatives, visit www.datalecltd.com.

The post Datalec Precision Installations Earns ‘B’ Score from CDP, Reinforcing Commitment to Environmental Transparency appeared first on Data Center POST.

Now and Going Nuclear: Powering the Next Generation of Data Centers

17 December 2025 at 16:00

Insights from ASG, Oklo Inc., Switch, and Equinix

Why Nuclear Energy is Back in the Data Center Conversation

At the infra/STRUCTURE Summit 2025, held October 15–16 at the Wynn Las Vegas, one of the most talked-about sessions was “Now and Going Nuclear.” The discussion explored how nuclear energy, long viewed as complex and controversial, is rapidly emerging as a viable solution for powering the data center industry’s next phase of growth.

Moderated by Daniel Golding, CTO of ASG, the panel featured Brian Gitt, Senior Vice President of Business Development at Oklo Inc.; Jason Hoffman, Chief Strategy Officer at Switch; and Philip Read, Senior Director of Product Management at Equinix. Together, they examined how technology, regulation, and market forces are aligning to make small modular reactors (SMRs) and nuclear-derived power a credible and necessary part of the digital infrastructure ecosystem.

A Generational Shift in Nuclear Perception

Daniel Golding opened the discussion by highlighting how dramatically attitudes toward nuclear energy have changed in recent years. “The political opposition has evaporated entirely in the past three to four years,” Golding observed. “What’s happened is a generational change. For younger generations who’ve grown up in a world shaped by climate change, nuclear risk seems modest compared to the risk of inaction.”

This generational shift, Golding noted, is paving the way for new conversations around nuclear deployment, not just as an energy option, but as an environmental imperative. The narrative has moved from “if” to “when,” setting the stage for nuclear integration into the world’s largest digital infrastructure operations.

Policy Momentum and Market Acceleration

Brian Gitt of Oklo described how a wave of regulatory and policy reforms has transformed the U.S. nuclear landscape in just the last year. “Since May, the federal government has released a series of executive orders removing barriers, unlocking fuel supply, and streamlining licensing,” Gitt said. “The NRC is now required to approve reactor applications within 18 months, and the DOE is opening federal lands for AI factories and power infrastructure.”

Gitt also announced that Oklo is leading construction on a $1.68 billion fuel recycling facility in Oak Ridge, Tennessee, the first of its kind in the U.S., designed to convert spent fuel into usable energy. “We’re taking what used to be seen as waste and turning it into 24/7 baseload power,” he explained. “We’ve moved from vision to execution, and the timeline from now to nuclear is about three years.”

Designing for a Nuclear-Powered Future

Jason Hoffman of Switch spoke to how data center design must evolve to integrate nuclear energy at the gigawatt scale. “When we talk about AI factories, we’re talking about facilities that are five times larger than what we’ve traditionally built,” Hoffman said. “These are sites measured in hundreds of acres, with power demand comparable to naval-scale energy systems. Nuclear makes that scale possible.”

He added that Switch and other major operators are actively exploring how to integrate self-generated nuclear power into future campuses. “It’s not just about access to power,” Hoffman said. “It’s about reliability, control, and sustainability. Nuclear enables all three.”

Philip Read of Equinix echoed this point from a customer perspective, emphasizing that clients want certainty. “Our customers want confidence in their power supply, growth strategy, and sustainability goals,” Read said. “They’re asking, ‘Do we need a different strategy for locations and energy sources?’ Nuclear provides that line of sight.”

Security, Scale, and Sustainability

The conversation also touched on key challenges. When asked what keeps him up at night, Hoffman was quick to answer: “Security posture.” Hoffman noted that as nuclear and data centers intersect, ensuring robust cybersecurity and operational safety will be critical.

Gitt added that misconceptions about nuclear waste remain one of the industry’s biggest hurdles. “We have enough stored fuel in the U.S. to power the country for generations,” Gitt said. “It’s not dangerous, it’s energy waiting to be unlocked. We’re sitting on the equivalent of five Saudi Arabias of energy, and we’re burying it instead of using it. That needs to change.”

Golding agreed, noting that for decades, the U.S. has stored waste in temporary pools, a model that is no longer scalable. The consensus: recycling and reusing fuel through modern SMRs is not only possible but essential.

Economic and Community Impact

Beyond technical feasibility, the panel highlighted the broader economic upside of nuclear development. Gitt shared that Oklo’s projects are already generating significant local economic benefits. “We just broke ground in Iowa, and the job creation has been incredible,” Gitt said. “This isn’t just energy innovation, it’s economic revitalization. Communities are competing to host these facilities because they bring skilled jobs, tax revenue, and long-term prosperity.”

Hoffman and Read both agreed that pairing nuclear generation with data center campuses could redefine industrial development in the U.S. “These are long-term, high-value assets,” Hoffman said. “They’re not speculative, they’re the backbone of America’s digital and economic future.”

From Renewable to Reliable: The Role of Baseload Power

Golding raised the question of whether hyperscalers are ready to embrace nuclear as part of their sustainability strategies. Gitt’s answer was unequivocal: “Every major hyperscaler now includes nuclear in their long-term power roadmap. It’s part of the equation for net-zero.”

Gitt noted that nuclear has the smallest materials footprint of any energy source, smaller even than wind or solar, making it one of the most resource-efficient options available. “If we want to keep the lights on and cut emissions, there’s really no alternative,” Gitt said. “The data center industry has realized that nuclear isn’t optional, it’s inevitable.”

From Vision to Reality

The panel made clear that the intersection of nuclear energy and data center infrastructure is no longer theoretical. Regulatory pathways are opening, commercial projects are underway, and the industry’s largest power consumers are preparing to integrate nuclear into their long-term sustainability and capacity strategies.

As Golding concluded, “This isn’t a thought experiment anymore. It’s happening. By the end of the decade, nuclear will be powering data centers, and helping our industry lead the global energy transition.”

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, and gain access to C-level executives, investors, and industry-leading research? Then save the date for infra/STRUCTURE 2026, set for October 7–8, 2026, at The Wynn Las Vegas. Pre-registration for the 2026 event is now open, and you can visit www.infrastructuresummit.io to learn more.

The post Now and Going Nuclear: Powering the Next Generation of Data Centers appeared first on Data Center POST.

Nostrum Data Centers to Deliver 500 MW of AI-Ready, Sustainable Capacity in Spain by 2027

17 December 2025 at 14:00

As demand for AI, cloud, and hyperscale infrastructure accelerates across Europe, Nostrum Data Centers is advancing a new generation of sustainable, high-performance data center assets in Spain, with availability beginning in 2027.

The Spain-based developer is delivering more than 500 MW of IT capacity, supported by secured land and power, enabling customers to move quickly from planning to deployment. With 300 MW of power already secured and scalable to 500 MW, Nostrum is addressing Europe’s growing need for resilient, efficient digital infrastructure.

Earlier this month, Nostrum Data Centers, part of Nostrum Group, announced that AECOM will design and manage its $2.1 billion data center campus in Badajoz, one of six strategically located developments across the country. These sites leverage Spain’s strong subsea connectivity, competitive energy costs, and robust power availability to support scalable growth.

“Our Spain-based data centers combine strategic site selection, secured power connections, and AI-ready infrastructure to meet the demands of the next-generation digital economy,” said Gabriel Nebreda, Chief Executive Officer at Nostrum Group. “Our team of industry leaders with over 25 years of experience are developing facilities that are not only highly efficient and scalable but also fully sustainable, supporting both our customers’ growth and global climate goals.”

Engineered for high-density AI and cloud workloads, Nostrum’s facilities are designed to achieve a PUE of 1.1 and a WUE of zero, eliminating water usage for cooling. Collectively, the developments are expected to prevent 10 million metric tonnes of CO2 emissions, aligning with the United Nations Sustainable Development Goals.
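The PUE target cited here follows from a simple definition: total facility energy divided by the energy consumed by IT equipment, with 1.0 as the theoretical ideal. A minimal sketch of that arithmetic, using illustrative numbers rather than Nostrum's actual figures:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative example only: a facility drawing 110 MWh overall while its
# IT load consumes 100 MWh yields a PUE of 1.1, the design target above.
print(round(pue(110_000, 100_000), 2))  # 1.1
```

A WUE (water usage effectiveness) of zero, by contrast, simply means the cooling design consumes no water per unit of IT energy, so there is no analogous ratio to compute.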

Nostrum’s 2027 delivery timeline reinforces its commitment to providing efficient, future-ready infrastructure across Spain for AI, cloud, and hyperscale customers.

Learn more about Nostrum Data Centers: www.thenostrumgroup.com/nostrum-data-centres

Click here to read the press release!

The post Nostrum Data Centers to Deliver 500 MW of AI-Ready, Sustainable Capacity in Spain by 2027 appeared first on Data Center POST.

ZincFive Earns Four Wins at the 2025 Power Technology Excellence Awards

11 December 2025 at 18:30

ZincFive® has closed out 2025 with major industry recognition, earning top honors in the 2025 Power Technology Excellence Awards across four categories: Innovation, Product Launch, Safety, and Environmental Excellence. Powered by GlobalData’s business intelligence, the awards celebrate companies pushing the global power sector forward, and this year’s results underscore ZincFive’s accelerating leadership.

The wins reflect the company’s momentum as demand for high-power, low-impact energy storage solutions continues to intensify. With nearly 2 gigawatts of nickel-zinc (NiZn) systems deployed or contracted worldwide, ZincFive is helping operators meet the explosive requirements of AI-driven data centers while strengthening resilience and reducing environmental impact.

At the center of this progress is ZincFive’s Immediate Power Solutions portfolio, which blends patented NiZn chemistry with intelligent system-level engineering. These systems deliver millisecond-level responsiveness to dynamic loads and operate reliably at higher temperatures, reducing cooling requirements and improving overall efficiency. The award-winning BC 2 AI UPS Battery Cabinet extends this approach even further, providing fast-load support for GPU-intensive AI applications and traditional outage protection in a single compact system. By consolidating functions that once required multiple layers of equipment, it frees valuable white space and simplifies power architecture.

ZincFive’s wins also reinforce the company’s long-standing commitment to safety and sustainability. NiZn technology is inherently safe, is built from abundant, recyclable materials, and delivers lifetime greenhouse gas emissions that are 25 to 50 percent lower than those of traditional lead-acid and lithium-ion options. This aligns with growing industry expectations for cleaner, more responsible power infrastructure.

These latest honors join a growing list of accolades, including recent recognition on TIME’s 2025 World’s and America’s Top GreenTech Companies lists, the 2024 Edison Award™, CleanTech Breakthrough’s 2024 Overall Innovation of the Year, and more, signaling a defining moment for ZincFive as it continues to set new benchmarks in mission-critical power.

To learn more, read the full release here.

The post ZincFive Earns Four Wins at the 2025 Power Technology Excellence Awards appeared first on Data Center POST.

Why AI Still Needs People: The Workforce Behind the Machines

11 December 2025 at 15:00

As artificial intelligence accelerates across global data centers, conversations often focus on compute, power density, and next-generation infrastructure. But according to Nabeel Mahmood, Strategic Advisor at ZincFive, and Brandon Smith, Vice President of Global Sales and Product at ZincFive, the most crucial element of AI scalability isn’t hardware. It’s people.

Moderated by Ilissa Miller, CEO of iMiller Public Relations, this webinar uncovered why the AI workforce, not compute, is the true limitation and what must change for sustainable growth.

People Are the Real Bottleneck in AI Scalability

Mahmood explained that scaling AI isn’t just a matter of adding more servers or GPUs. It requires practitioners who understand data pipelines, model governance, operational resiliency, and infrastructure design. Without skilled talent, organizations face operational risks despite abundant compute. Smith highlighted that AI and machine learning job postings have increased significantly, noting a recent figure showing a 450 percent rise, far outpacing available expertise.

Technical Silos Are Creating a New Skills Crisis

The discussion emphasized a growing gap across disciplines. Electrical, mechanical, IT, and data science teams frequently operate in isolation despite the interdependent nature of modern AI data centers. This fragmentation leads to delays, inefficiencies, and architectures unable to handle today’s dynamic workloads. Smith described the shift from traditional “white space versus black space” to today’s “blended gray space”, where cross-functional knowledge is essential. Mahmood added that the inability to transfer knowledge horizontally and vertically across teams is a major obstacle to scaling AI systems.

Energy Innovation Is Essential for AI Expansion

AI’s spiking, unpredictable workloads challenge a grid that was never designed for ultra-dense compute. Mahmood and Smith both pointed to advanced energy storage solutions, including ZincFive’s high-power nickel-zinc technology, as the key to unlocking performance. These innovations smooth electrical spikes, maximize usable capacity, and support emerging off-grid compute models that reduce dependence on constrained utilities.

Preparing the Future AI Workforce

Both speakers agreed that organizations must treat talent as core infrastructure. That means forecasting future skills, investing in upskilling programs, partnering with universities, and fostering environments where engineers can innovate across disciplines. As Smith noted, the strongest teams of tomorrow will be adaptive, coachable, and ready to evolve alongside rapidly changing AI infrastructure demands.

Watch the webinar below:

The post Why AI Still Needs People: The Workforce Behind the Machines appeared first on Data Center POST.

AI’s Impact on Global Market Expansion Patterns: How Artificial Intelligence Is Redefining the Future of Global Infrastructure

9 December 2025 at 16:00

At infra/STRUCTURE Summit 2025, industry leaders from Inflect, NTT and NextDC explored how AI is accelerating development timelines, reshaping deal structures, and redrawing the global data center map.

The infra/STRUCTURE Summit 2025, held at The Wynn Las Vegas from October 15–16, 2025, convened the brightest minds in digital infrastructure to explore the seismic shifts underway in the age of artificial intelligence. Among the most forward-looking sessions was “AI Impact on Global Market Expansion Patterns,” a discussion that unpacked how AI is transforming where and how data centers are developed, financed, and operated worldwide.

Moderated by Swapna Subramani, Research Director, IMEA, for Structure Research, the panel featured leading executives including Mike Nguyen, CEO, Inflect; Steve Lim, SVP, Marketing & GTM, NTT Global Data Centers; and Craig Scroggie, CEO and Managing Director, NEXTDC. Together, they examined how the explosive demand for AI compute power is pushing developers to rethink long-held assumptions about geography, energy, and risk.

AI Is Rewriting the Rules of Global Expansion

For decades, site selection decisions revolved around a handful of core variables: power cost, connectivity, and proximity to major user populations. But in 2025, those rules are being rewritten by the unprecedented scale of AI workloads.

Regions once considered secondary are suddenly front-runners. Scroggie noted how saturation in markets like Singapore and Hong Kong has forced expansion across Thailand, Indonesia, Malaysia, and India, each now racing to deliver power, land, and permitting capacity fast enough to attract global hyperscalers.

“You can’t build large campuses in Singapore anymore,” Scroggie said. “But throughout Southeast Asia, we’re seeing rapid acceleration as operators balance scale, sustainability, and access to emerging population centers.”

The panelists agreed that energy constraints, not capital, are now the primary limiting factor. “The short term is about finding locations where power exists at scale,” explained Scroggie. “The longer-term challenge is developing new storage and generation models to make that power sustainable.”

Geopolitics and Sovereignty Are Shaping Investment

AI’s global reach has also brought geopolitics and national sovereignty to the forefront of infrastructure strategy.

“We’re living in more challenging times than ever before,” said Nguyen, referencing chip export restrictions and international trade interventions. “AI is no longer just a technological conversation, it’s a matter of national defense and economic competitiveness.”

He noted that ongoing trade restrictions with China are reshaping who gets access to advanced chips and where they can be deployed. “The combination of geopolitical and local legislative pressures determines the future of global trade management,” Nguyen said.

As countries strengthen data sovereignty and privacy laws, regional differentiation is intensifying. “Every geography has a different view,” Nguyen continued. “Some nations are creating frameworks to enable AI and cross-border data sharing, others are locking down their ecosystems entirely.”

Scroggie echoed this, adding that sovereignty-driven strategies are driving a surge in localized buildouts. “We’re seeing more countries push to ensure domestic control of digital assets,” he said. “That’s changing the structure of global supply chains and creating ripple effects that extend well beyond national borders.”

The Industry’s Race Against Time

The conversation turned toward construction velocity, a challenge every developer feels acutely.

“Are we building fast enough?” asked Subramani, the session’s moderator.

“Simply put, no,” said Scroggie. “We can’t keep up with demand. Traditional 12-to-24-month build cycles no longer align with AI’s acceleration curve. We have to find a way to build differently.”

The group discussed the need for new modular construction methods, accelerated permitting, and AI-assisted project management to meet scale and speed requirements.

Nguyen framed it within the broader context of industrial history. “We are standing at the dawn of the next industrial revolution,” he said. “Just as steam, electricity, and the internet reshaped economies, AI will redefine global competitiveness. The countries that can deliver sustainable, affordable power will lead.”

He pointed to the “Jevons Paradox” of AI infrastructure: the more intelligence we produce, the cheaper it becomes, and the more of it the world demands. “The hallmark of global competitiveness will be the unit cost of producing intelligence,” Nguyen explained. “That requires deep collaboration between developers, energy providers, and governments.”

Evolving Deal Structures Reflect a More Complex Market

The financial framework of data center development is also changing dramatically. Traditional “build-to-suit” models are giving way to more creative, multi-tiered partnerships as both hyperscalers and institutional investors seek flexibility and risk mitigation.

“There’s a diversity of players now entering the market, some with deep operational experience, others completely new to the space,” said Scroggie. “Everyone’s chasing the same megawatts, but their risk tolerance and credit profiles vary widely.”

Scroggie also described how education and transparency have become critical. “We’re constantly advising clients on what’s feasible and what’s not. Many are coming in with unrealistic expectations about speed, power, or pricing. It’s part of our job to bridge that gap.”

The consensus was clear: AI-driven demand has transformed data centers from real estate assets into strategic infrastructure platforms, with financial, political, and environmental implications far beyond the industry itself.

Looking Ahead: The Next Decade of AI-Driven Infrastructure

As the discussion drew to a close, the panelists reflected on the extraordinary pace of change. “AI is not replacing, it’s additive,” said Scroggie. “Every new workload, every new inference model adds demand. The scale we’re dealing with is unprecedented.”

In this new era, speed, sustainability, and sovereignty are the defining dimensions of competitiveness. The industry’s success will hinge on its ability to innovate faster than the challenges it faces, whether those are regulatory, environmental, or geopolitical.

“We’re building the highways of the digital era,” said Nguyen in closing. “And like every industrial revolution before it, those who solve the energy equation will lead the world.”

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, and gain access to C-level executives, investors, and industry-leading research? Then save the date for infra/STRUCTURE 2026, set for October 7-8, 2026, at The Wynn Las Vegas. Pre-registration for the 2026 event is now open, and you can visit www.infrastructuresummit.io to learn more.

The post AI’s Impact on Global Market Expansion Patterns: How Artificial Intelligence Is Redefining the Future of Global Infrastructure appeared first on Data Center POST.

Redefining Investment and Innovation in Digital Infrastructure

9 December 2025 at 14:00

How new entrants are reshaping data center operations, capital models, and sustainable development

At the infra/STRUCTURE Summit 2025, held October 15–16 at The Wynn Las Vegas, one of the most engaging conversations explored how a new generation of operators is reshaping the data center landscape.

The session, “New Operating Platforms,” moderated by Philbert Shih, Managing Director of Structure Research, brought together executives leading some of the most innovative digital infrastructure ventures: Ernest Popescu, CEO of Metrobloks Data Centers; Eanna Murphy, Founder and CEO of Montera Infrastructure; and Chuck McBride, CEO of Atmosphere Data Centers.

Together, they discussed how new business models, evolving capital structures, and sustainability commitments are redefining what it means to operate in the fast-changing world of digital infrastructure.

Identifying Gaps in a Rapidly Evolving Market

Shih opened the discussion by noting that the surge in investment across digital infrastructure has created room for new operating platforms to emerge.

“The industry has arguably over-indexed on hyperscale and colocation,” Shih said. “But the opportunity now lies in the gaps, in the diverse mix of services, geographies, and market segments that remain underserved.”

He challenged the panelists to explore how their platforms are addressing those gaps, and what kinds of efficiencies or innovations are shaping their approach.

Building for Speed and Efficiency

Murphy described his company’s focus on secondary and emerging markets, areas where demand is strong but infrastructure capacity has lagged.

“We wanted to look at regions where enterprise customers were underserved,” Murphy said. “Our model focuses on connecting Tier 2 cities and surrounding areas, delivering capacity closer to users and creating new connectivity ecosystems.”

Murphy emphasized that Montera’s approach is designed for speed and scale, combining pre-engineered designs and local partnerships to accelerate delivery.

“Even in smaller markets,” Murphy said, “you can build meaningful density if you plan it right and align with community needs.”

Balancing Capital, Capacity, and Time-to-Market

Popescu noted that access to capital remains one of the biggest hurdles for new operators, especially those outside traditional hyperscale markets.

“There’s plenty of opportunity in the market, but capital deployment still comes down to risk tolerance and timing,” Popescu said. “You can’t shortcut power availability, but you can manage time-to-market with flexible models and smart partnerships.”

Metrobloks focuses on developing scalable, self-performable campuses in underserved markets, combining modular design with utility partnerships to bring new capacity online faster.

“It might not be massive by hyperscale standards,” Popescu said. “But for our customers, being able to access distribution power in 12 to 18 months can make all the difference.”

Sustainability and the Next Generation of Infrastructure

For McBride, sustainability and long-term adaptability are at the heart of his company’s strategy.

“We made a conscious choice not to inherit legacy assets,” McBride said. “Instead, we’re building brand-new AI-ready campuses in underserved markets, what we call next-generation training centers.”

Atmosphere’s developments prioritize renewable energy integration and community revitalization. McBride described projects that convert industrial land, such as former power plant sites, into modern digital campuses.

“We’re taking coal-fired sites and turning them into green campuses,” McBride said. “It’s about giving these sites a second life while meeting the demands of AI and high-performance computing.”

Adapting to Changing Technology Cycles

The conversation turned to how operators are preparing for rapid changes in compute and chip technology, particularly as AI drives unprecedented density and cooling requirements.

Murphy noted the growing challenge of aligning long-term infrastructure planning with short hardware cycles.

“Every six months we’re seeing new chip architectures from NVIDIA, AMD, and others,” Murphy said. “But the data center development cycle is still three to five years. The challenge is designing for what’s next without overcommitting to what’s current.”

Panelists agreed that future-proofing is now a key differentiator, with flexibility, modularity, and liquid cooling readiness built into early designs.

Smarter Capital and Better Collaboration

Reflecting on the evolution of the investment landscape, Popescu shared that today’s capital partners are far more informed about the digital infrastructure asset class than even a few years ago.

“Institutional investors have become much more educated,” Popescu said. “The conversations are smarter, and there’s a better understanding of the balance between cost, speed, and sustainability.”

McBride added that hyperscalers, too, have shown greater willingness to adapt pricing and partnership structures in response to development challenges.

“Three years ago, I had never seen the major cloud players react so quickly,” McBride said. “They know developers are essential to getting capacity online, and that alignment benefits everyone.”

The Opportunity Ahead

In closing, Shih reflected on how the emergence of these new operating platforms is reshaping the broader ecosystem.

“We’re watching the rise of operators who are not just building capacity but reimagining how the industry functions,” Shih said. “They’re bridging the gap between capital, sustainability, and innovation, and that’s what will define the next phase of growth.”

As the digital infrastructure industry continues to evolve, these leaders are demonstrating that success now depends as much on creativity and collaboration as it does on capital and construction.


The post Redefining Investment and Innovation in Digital Infrastructure appeared first on Data Center POST.
