Why Water Risk Is the Missing Variable in AI Infrastructure Planning

While power dominates the headlines in AI infrastructure, water is the silent arbiter of project viability. Investors and developers obsess over megawatts and grid capacity, but the reality is that cooling systems are tethered to a resource that is often less predictable and more politically charged. When water or wastewater capacity hits a ceiling, the fallout moves beyond engineering. It triggers permitting stalls, operational interruptions, and structural impairment of asset value.

Across the U.S., municipalities are no longer just providing service; they are becoming the ultimate ‘gatekeepers’ for high-volume users. For instance, Tucson now requires any new or expanding large water user expecting to use more than 7.4 million gallons per month to submit a conservation plan, undergo public review, and secure City Council approval before accessing Tucson Water.

Marana’s policy further states that Marana Water will not supply potable water to data centers for cooling and requires documentation of an alternate source. In Chandler, the city council unanimously rejected a proposal to rezone land for a 422,000-square-foot AI data center campus after public opposition emphasized water use, noise, and limited local benefit.

Because these water policies often surface between engineering and financial close, they represent a major ‘blind spot’ for developers. Late-stage discovery of water limitations results in stranded capital and protracted entitlement delays. For modern investors, water risk is now a primary underwriting variable that can dictate the viability of an entire transaction.

Why Power Is Only Half the Constraint

Power determines how much IT load can be energized, but cooling determines whether that load can operate within temperature limits on peak summer days. Cooling design also determines whether the site depends on local water, meaning the true constraint is rarely singular.

Data centers typically rely on one of two primary heat rejection approaches.

Evaporative systems, such as cooling towers, remove heat through water evaporation. This requires continuous makeup water to replace evaporative loss and generates blowdown to control mineral concentration. Blowdown becomes a wastewater stream, tying the facility to sewer capacity, discharge regulations, and pretreatment requirements.

Dry systems, such as air-cooled chillers and dry coolers, reduce direct on-site water consumption but increase electrical demand as outdoor temperatures rise, particularly during summer peaks. That shift moves the constraint toward grid capacity and power pricing during the very hours when electricity is most expensive and constrained. In both configurations, the constraint does not disappear but shifts, and each approach carries a distinct exposure profile that must be evaluated at the basin and grid level.
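
To make the evaporative side of that trade-off concrete, the sketch below applies the standard cooling-tower water-balance relations: evaporation is set by the heat rejected, blowdown by the cycles of concentration, and makeup must cover both. The IT load, heat-rejection multiplier, and cycles of concentration used here are illustrative assumptions, not figures from any particular facility.

```python
# Minimal sketch of an evaporative cooling-tower water balance.
# All numerical inputs are illustrative assumptions, not facility data.

LATENT_HEAT_KJ_PER_KG = 2260.0  # approximate latent heat of vaporization of water
KJ_PER_KWH = 3600.0

def tower_water_balance(heat_rejected_kwh: float, cycles_of_concentration: float):
    """Return (evaporation, blowdown, makeup) in liters for a given heat load.

    Evaporation carries the heat away, blowdown is purged to limit dissolved
    solids, and makeup replaces both losses (drift is neglected here).
    """
    evaporation_l = heat_rejected_kwh * KJ_PER_KWH / LATENT_HEAT_KJ_PER_KG  # ~1 L per kg
    blowdown_l = evaporation_l / (cycles_of_concentration - 1.0)
    makeup_l = evaporation_l + blowdown_l
    return evaporation_l, blowdown_l, makeup_l

# Hypothetical example: 20 MW of IT load for one day, ~1.3x total heat
# rejection (IT plus ancillary losses), and 4 cycles of concentration.
heat_kwh = 20_000 * 24 * 1.3
evap_l, blowdown_l, makeup_l = tower_water_balance(heat_kwh, cycles_of_concentration=4.0)
print(f"makeup ≈ {makeup_l / 1e6:.2f} ML/day, blowdown ≈ {blowdown_l / 1e6:.2f} ML/day")
```

Raising the cycles of concentration cuts makeup and blowdown volumes, but it also raises the dissolved-solids load in the discharge, which is exactly the pretreatment and plant-acceptance issue discussed below.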

Inside the Water Footprint of AI Data Centers

Water exposure extends beyond the visible intake line and is often more complex than initial site reviews suggest.

In tower-based systems, make-up water demand rises as ambient temperatures increase because more heat must be rejected during peak hours. Blowdown volumes also rise, increasing steady wastewater discharge. In many jurisdictions, wastewater capacity determines viability before raw water supply does. Dissolved solids and treatment chemistry can trigger pretreatment mandates or exceed plant acceptance thresholds, creating operational bottlenecks that were not modeled at the outset.

The true water footprint of an asset is often obscured by ‘siloed’ diligence. While a facility might minimize on-site usage, it remains tethered to the water intensity of the local energy mix—a dependency that creates a hidden risk during peak demand. Because most models consider water, power, and wastewater as isolated variables, the full scale of the water-energy nexus is rarely consolidated. This leaves the project exposed to systemic failure points that only become visible late in the development cycle.
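
As a minimal sketch of consolidating that nexus, the snippet below adds the on-site cooling draw to an indirect term derived from annual energy use and the water intensity of the local generation mix. The intensity figure and volumes are hypothetical placeholders; a real assessment would use basin- and fleet-specific factors.

```python
# Minimal sketch of combining direct (on-site cooling) and indirect
# (grid-embedded) water use. All figures are hypothetical placeholders.

def total_water_footprint_m3(onsite_cooling_m3: float,
                             annual_energy_mwh: float,
                             grid_water_intensity_m3_per_mwh: float):
    """Return (total, indirect) annual water footprint in cubic meters."""
    indirect_m3 = annual_energy_mwh * grid_water_intensity_m3_per_mwh
    return onsite_cooling_m3 + indirect_m3, indirect_m3

# Hypothetical dry-cooled site: little water used on site, but 200,000 MWh/year
# drawn from a thermally dominated grid with an assumed consumption intensity.
total_m3, indirect_m3 = total_water_footprint_m3(
    onsite_cooling_m3=5_000,
    annual_energy_mwh=200_000,
    grid_water_intensity_m3_per_mwh=1.9,
)
print(f"grid-embedded share ≈ {indirect_m3 / total_m3:.0%} of total water footprint")
```

Under these assumptions the grid-embedded term dominates, which is why a low on-site figure alone says little about a site's true exposure.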

Why Water Risk Is Frequently Mispriced

The assumption that water is a stable, predictable utility is a significant blind spot in traditional underwriting. Standard diligence often stops at a letter of intent from a provider, ignoring regulatory contingencies—such as recycled water mandates or peak-heat restrictions—that govern high-intensity facilities. Failing to account for these municipal requirements leads to Capex volatility and structural delays, turning a simple utility expense into a primary threat to projected returns.

At a portfolio level, aggregated corporate reporting can obscure localized exposure. Average water intensity metrics do not reveal whether specific assets sit in basins facing physical scarcity or wastewater systems operating near capacity. Valuations that assume perpetual expansion can fail at the local level when additional allocation is unavailable, undermining long-term growth assumptions embedded in underwriting models.

From Environmental Constraint to Financial Exposure

Water risk tends to accumulate over time, moving through operations, regulation, and local politics until it becomes a real constraint on performance.

For operators, the first pressure points are often summer peaks, when supply limits tighten and water quality can swing at the exact moment cooling systems are working hardest. That pressure forces emergency operational decisions, such as pulling maintenance forward or taking short outages, and the revenue impact of those decisions is usually disproportionate to the duration of the disruption.

For developers, on the other hand, regulatory shifts can trigger midstream redesigns. A project engineered around potable water may be required to transition to reclaimed supply, adding infrastructure, storage, and treatment complexity after capital has already been committed.

Public opposition at the local level introduces political friction that stalls approvals and compounds reputational risk. Contentious infrastructure upgrades can derail project schedules and force unfavorable cost-sharing renegotiations. Collectively, these municipal factors feed into underwriting through increased delay risk, Capex volatility, and a diminished capacity for long-term expansion.

What Needs to Change in Infrastructure Planning

Water must be evaluated at the same stage as power during site screening and early design.

A simple confirmation of water availability is no longer sufficient. Basin-level allocation rules, drought contingency plans, wastewater capacity, discharge quality requirements, and embedded grid water intensity must be assessed before engineering assumptions are finalized.

Every investment memo and design review should include a transparent water balance that identifies source type, volume requirements, discharge pathways, and regulatory triggers under peak conditions. This allows engineering and underwriting teams to evaluate exposure in parallel rather than sequentially.
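
One way such a memo-level water balance might be structured is sketched below, with source, peak volumes, discharge pathway, and regulatory triggers captured in a single record. The field names and values are illustrative, not a standard reporting schema.

```python
# Illustrative structure for a memo-level water balance; field names and
# values are hypothetical, not a standard reporting schema.

from dataclasses import dataclass, field

@dataclass
class WaterBalanceSummary:
    source_type: str                   # e.g. "potable", "reclaimed", "groundwater"
    peak_makeup_m3_per_day: float      # demand under peak-heat design conditions
    avg_makeup_m3_per_day: float
    discharge_pathway: str             # e.g. "municipal sewer", "zero liquid discharge"
    peak_blowdown_m3_per_day: float
    regulatory_triggers: list = field(default_factory=list)

site = WaterBalanceSummary(
    source_type="reclaimed",
    peak_makeup_m3_per_day=5_200,
    avg_makeup_m3_per_day=3_100,
    discharge_pathway="municipal sewer",
    peak_blowdown_m3_per_day=1_300,
    regulatory_triggers=[
        "monthly use above the local large-user review threshold",
        "blowdown TDS near the treatment plant's acceptance limit",
        "drought contingency stage that curtails non-essential supply",
    ],
)
```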

Water limits are now shaping asset values in a direct, measurable way. Resilience starts with expansion plans that can hold up under tighter supply caps, and with capital that funds backup sourcing options and protection against shifting rules. Financing and insurance need to move to basin-by-basin risk models, because water availability is already the deciding factor in approvals and the constraint that most reliably dictates whether an asset can keep performing over time.

# # #

About the Author

Dr. Vian Sharif is the Founder and President of NatureAlpha, an AI-first fintech platform delivering science-based environmental risk insights across nearly $3 trillion in assets under management. With 20 years of experience at the intersection of finance, technology, and sustainability, she also serves as Head of Sustainability at FNZ Group and is a global advisor on nature-aligned investing. She holds a PhD in Environmental Behavior Change and was recognized with a 2025 Fin-Earth Award for Natural Capital and Biodiversity.

AI Data Centers Are Ready to Explode, If the Grid Can Keep Up

Having spent most of my career at the nexus of power generation and industrial infrastructure, I can safely say that few things have stressed the American electric grid quite like the explosive growth in AI-driven data centers. At Industrial Info Resources, we are currently tracking more than $2.7 trillion in data center projects worldwide, including more than $1 trillion in new US investment in just nine short months.

It is not only technology that faces skyrocketing demand; electricity does, too. With its voracious appetite for power, artificial intelligence is making plain just how unprepared the aging US power grid is for the next major step in technological evolution.

AI’s Appetite for Power

The amount of computational power AI requires is astonishing. More than 700 million new users have gone online in the past year alone, and according to estimates by OpenAI, global compute demand could soon require a gigawatt of new capacity every week. That is roughly one big power station every seven days.

We are already seeing the ramifications in our project data at IIR Energy. A large number of the biggest hyperscale projects are reaching major capacity bottlenecks: utilities in some areas are telling data center operators they won’t be able to provide additional megawatts until as late as 2032. A few years ago, that kind of delay was unthinkable.

Limits like these are forcing developers to think outside the box when choosing data center construction locations. Rather than concentrating on central metro areas, they are gravitating toward sites near transmission interconnections, wind or solar parks, or existing industrial areas already served by substations.

The New York Independent System Operator’s Comprehensive Reliability Plan, or CRP, predicts impending power shortages across the state. It identifies three key challenges that are occurring at once: an older generation fleet, fast-rising loads from data centers and chip plants, and new hurdles to building supply. It’s a confluence of threats that are straining reliability planning to its limits.

An Outdated Grid Meets a $40 Trillion Market

With electricity demand having been stagnant for years, improvements to the country’s power grid have not been prioritized. This recent rebound in load is meeting a grid that is already aging and congestion-prone. Some regions face record-breaking congestion pricing and curtailment. Last week, PJM, the largest regional electricity transmission organization in the United States, saw wholesale capacity auction prices jump roughly 800%.

This serves as a powerful reminder that while the digital economy proceeds at light speed, physical infrastructure doesn’t. Transmission upgrades take years to approve and construct, and generation projects can be held back by supply chains or local policy barriers. AI’s future, as grand as it is, now hinges on how fast we can upgrade the physical systems that enable it.

Behind the Meter: The New Energy Strategy

Confronted with delayed delivery schedules and lengthy interconnection queues, data center builders are taking matters into their own hands. Increasingly, they are investing in “behind-the-meter” options to guarantee access to the power they require: natural gas turbines, high-end fuel cells, and extended renewable contracts that offer a direct path to generation without waiting for utility upgrades. Liquid cooling technologies are also helping data center operators decrease freshwater consumption while improving efficiency.

Data centers are no longer simple consumers of power. Increasingly, they are becoming power collaborators and, in some instances, power generators. Utilities are adapting by teaming with developers to co-develop generation assets or by reassessing baseload integrity. Next-generation designs are on track to reach a megawatt or more per rack by 2029.

Why Reliable Intelligence Matters

In a market changing this rapidly, it’s crucial to have reliable information. And that’s where IIR Energy offers a distinct edge. We follow projects from initial planning to evaluation and refinement, tracking every milestone and closely watching the power fundamentals that influence success.

This transparency allows utilities, investors, and developers to discern actual development from rumors. For example, whereas some reports indicate that big data center builds are decreasing, our intelligence indicates just the opposite: the buildout continues to accelerate and spread, shifting to new regions and new forms of power delivery.

Reliable, corroborated information allows decision-makers to know exactly where expansion is occurring as well as the limitations that will hinder it. That is the basis of our business at IIR Energy. We offer insight capable of piercing the din to predict how AI, energy, and infrastructure will continue to develop side by side.

All in all, this serves as a reminder of a simple yet powerful reality: the AI power race will not just be about smarter algorithms. We will need smarter infrastructure to match.

# # #

About the Author

Britt Burt is the Vice President of Power Industry Research at IIR Energy, bringing nearly 40 years of expertise across the power, energy, and data center sectors. He leads IIR’s power research team, overseeing the identification and verification of data on operational and proposed power plants worldwide. Known for his deep industry insight, Britt plays a key role in keeping global energy intelligence accurate and up to date.
