

Schneider Electric Recognized for Continued Sustainability Leadership Across Leading ESG Ratings in 2025

30 January 2026 at 04:56

Schneider Electric’s performance across major global benchmarks reflects consistent progress on climate, social impact and governance. Schneider Electric, a global energy technology leader, has once again been recognized by global environmental, social, and governance (ESG) organizations for the strength, consistency, and long-term credibility of its sustainability performance. Schneider Electric achieved ... [continued]

The post Schneider Electric Recognized for Continued Sustainability Leadership Across Leading ESG Ratings in 2025 appeared first on CleanTechnica.


Issues Data Centers Face and How to Overcome Them: A Guide for Managers

20 January 2026 at 14:30

Data centers are the backbone of modern digital infrastructure. They power cloud services, financial systems, healthcare platforms, and nearly every technology-driven business. As demand for data storage and processing grows, so do the operational, financial, and risk-related challenges data centers face. For managers, understanding these issues and knowing how to address them proactively is critical to maintaining uptime, security, and long-term viability. Here is what you need to know to keep up with that demand.

Rising Energy Costs and Efficiency Demands

One of the most persistent challenges for data centers is energy consumption. Powering servers, cooling systems, and redundancy infrastructure requires enormous amounts of electricity, and energy costs continue to rise globally. Beyond cost, there is increasing pressure from regulators and customers to reduce environmental impact.

Managers can address this by investing in energy-efficient hardware, optimizing airflow and cooling layouts, and adopting real-time monitoring tools to identify waste. Long-term strategies may include transitioning to renewable energy sources or partnering with utility providers for more favorable pricing structures.

Cooling and Thermal Management Challenges

Heat is an unavoidable byproduct of high-density computing. Inefficient cooling not only increases costs but also raises the risk of equipment failure and downtime. As server densities increase, traditional cooling methods often struggle to keep up.

Modern solutions include hot-aisle/cold-aisle containment, liquid cooling, and AI-driven thermal monitoring systems. For managers, the key is treating cooling as a dynamic system rather than fixed infrastructure, one that evolves alongside hardware demands.

Financial Risk and Insurance Considerations

Data centers face significant financial exposure from equipment damage, downtime, liability claims, and unforeseen events. Even with strong operational controls, risk cannot be eliminated entirely.

This is where insurance becomes a critical part of risk management. Evaluating coverage that aligns with the unique needs of data center operations can help protect against losses that would otherwise threaten business continuity. BOP insurance by Next Insurance can help managers think more holistically about protecting assets, operations, and revenue streams as part of an overall risk strategy.

Downtime and Business Continuity Risks

Even brief outages can result in significant financial losses, reputational damage, and contractual penalties. This downtime may be caused by:

  • Power failures
  • Human error
  • Equipment malfunction
  • External events

To mitigate this risk, managers should prioritize redundancy at every critical point, including power supplies, network connections, and backup systems. Regular testing of disaster recovery plans is just as important as documenting them. A plan that hasn’t been tested is often unreliable in real-world conditions, putting your business and the customers you serve at risk.

Cybersecurity and Physical Security Threats

Data centers face a dual security challenge: digital threats and physical risks. Cyberattacks continue to grow in sophistication, while physical threats such as unauthorized access, theft, or vandalism remain real concerns.

Addressing this requires layered security. On the digital side, this includes continuous patching, network segmentation, and monitoring for unusual activity. Physically, access controls, surveillance systems, and strict visitor protocols are essential. Managers should also ensure that staff training keeps pace with evolving threats, as human error remains a major vulnerability.

Compliance and Regulatory Pressure

Data centers often operate under complex regulatory requirements related to data privacy, industry standards, and regional laws. Compliance failures can result in fines, legal exposure, and loss of customer trust.

Managers can stay ahead by maintaining clear documentation, conducting regular audits, and working closely with legal and compliance teams. Building compliance into operational processes, rather than treating it as an afterthought, reduces risk and simplifies reporting.

Turning Challenges Into Strategic Advantage

While data centers face a wide range of operational and strategic challenges, these issues also present opportunities for improvement and differentiation. Managers who address them proactively are better positioned to deliver reliability and value in a competitive market. Don’t let your facility be the one that falls victim to these issues; take action now.

# # #

About the Author:

James Daniels is a freelance writer, business enthusiast, a bit of a tech buff, and an overall geek. He is also an avid reader who can spend hours learning about the latest gadgets and tech while offering views and opinions on these topics.

The post Issues Data Centers Face and How to Overcome Them: A Guide for Managers appeared first on Data Center POST.

DC Investors Are Choosing a New Metric for the AI Era

2 December 2025 at 16:00

The conversation around data center performance is changing. Investors, analysts, and several global operators have begun asking a question that PUE cannot answer: how much compute do we produce for every unit of power we consume? This shift is not theoretical; it is already influencing how facilities are evaluated and compared.

Investors are beginning to favor data center operators who can demonstrate not only energy efficiency but also compute productivity per megawatt. Capital is moving toward facilities that understand and quantify this relationship. Several Asian data center groups have already started benchmarking facilities in this way, particularly in high density and liquid cooled environments.

Industry organizations are paying attention to these developments. The Open Compute Project has expressed interest in reviewing a white paper on Power Compute Effectiveness (PCE) and Return on Invested Power (ROIP) to understand how these measures could inform future guidance and standards. These signals point in a consistent direction: PUE remains valuable, but it can no longer serve as the primary lens for evaluating performance in modern facilities.

PUE is simple and recognizable.

PUE = Total Facility Power ÷ IT Power

It shows how much supporting infrastructure is required to deliver power to the IT load. What it does not show is how effectively that power becomes meaningful compute.

As AI workloads accelerate, data centers need visibility into output as well as efficiency. This is the role of PCE.

PCE = Compute Output ÷ Total Power

PCE reframes performance around the work produced. It answers a question that is increasingly relevant: how much intelligence or computational value do we create for every unit of power consumed?

Alongside PCE is ROIP, the operational companion metric that reflects real time performance. ROIP provides a view of how effectively power is being converted into useful compute at any moment. While PCE shows long term capability, ROIP reflects the health of the system under live conditions and exposes the impact of cooling performance, density changes, and power constraints.
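
To make the relationship between these ratios concrete, here is a minimal Python sketch using purely hypothetical meter readings. Expressing compute output as delivered TFLOPS, and reading ROIP as the share of rated compute delivered under live conditions, are illustrative assumptions for this sketch, not definitions taken from the PCE/ROIP white paper.

```python
# Minimal sketch of the three ratios, with hypothetical readings.
# "Compute output" is expressed here as delivered TFLOPS; the choice of
# unit is an assumption for illustration, not a prescribed definition.

def pue(total_facility_kw: float, it_kw: float) -> float:
    """PUE = Total Facility Power / IT Power."""
    return total_facility_kw / it_kw

def pce(compute_output_tflops: float, total_facility_kw: float) -> float:
    """PCE = Compute Output / Total Power (here: TFLOPS per facility kW)."""
    return compute_output_tflops / total_facility_kw

def roip(delivered_tflops_now: float, rated_tflops: float) -> float:
    """One possible live ROIP reading: share of rated compute actually
    delivered right now, exposing cooling, density and power constraints."""
    return delivered_tflops_now / rated_tflops

# Hypothetical snapshot of a high-density, liquid-cooled hall
total_kw, it_kw = 1_150.0, 1_000.0           # facility vs IT power draw
rated_tf, delivered_tf = 40_000.0, 31_000.0  # rated vs delivered TFLOPS

print(f"PUE  = {pue(total_kw, it_kw):.2f}")                           # ~1.15
print(f"PCE  = {pce(delivered_tf, total_kw):.1f} TFLOPS/kW")          # ~27.0
print(f"ROIP = {roip(delivered_tf, rated_tf):.0%} of rated compute")  # ~78%
```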

This shift in measurement mirrors what has taken place in other sectors. Manufacturing moved from uptime to throughput. Transportation moved from mileage to performance and reliability. Data centers, especially those supporting AI and accelerated computing, are now moving from efficiency to productivity.

Cooling has become a direct enabler of compute and not just a supporting subsystem. When cooling performance changes, compute output changes with it. This interdependence means that understanding the relationship between power, cooling capability, and computational output is essential for real world performance, not just engineering design.

PUE still matters. It reflects operational discipline, mechanical efficiency, and the overall metabolism of the facility. What it cannot reveal is how much useful work the data center is actually producing or how effectively it can scale under load. PCE and ROIP fill that gap. They provide a more accurate view of capability, consistency, and return on power, especially as the industry moves from traditional air cooled environments to liquid ready, high density architectures.

The next phase of data center optimization will not be defined by how little power we waste, but by how much value we create with the power we have. As demand increases and the grid becomes more constrained, organizations that understand their true compute per megawatt performance will have a strategic and economic advantage. The move from energy scarcity to energy stewardship begins with measuring what matters.

The industry has spent years improving efficiency. The AI era requires us to improve output.

# # #

About the Author

Paul Quigley is the former President and current Chief Strategic Partnership Officer of Airsys Cooling Technologies, and a global advocate for high density, energy efficient data center design. With more than three decades in HVAC and mission critical cooling, he focuses on practical solutions that connect energy stewardship with real world compute performance. Paul writes and speaks internationally about PCE, ROIP, and the future of data center health in the age of AI.

The post DC Investors Are Choosing a New Metric for the AI Era appeared first on Data Center POST.

The Speed of Burn

17 November 2025 at 16:00

It takes the Earth hundreds of millions of years to create usable energy.

It takes us milliseconds to burn it.

That imbalance between nature’s patience and our speed has quietly become one of the defining forces of our time.

All the power that moves our civilization began as light. Every joule traces back to the Big Bang, carried forward by the sun, stored in plants, pressed into fuels, and now released again as electricity. The current that runs through a data center today began its journey billions of years ago…ancient energy returning to motion through modern machines.

And what do we do with it? We turn it into data.

Data has become the fastest-growing form of energy use in human history. We are creating it faster than we can process, understand, or store it. The speed of data now rivals the speed of light itself, and it far exceeds our ability to assign meaning to it.

The result is a civilization burning geological time to produce digital noise.

The Asymmetry of Time

A hyperscale data center can take three to five years to design, permit, and build. The GPUs inside it process information measured in trillionths of a second. That mismatch, years to construct and microseconds to consume, defines the modern paradox of progress. We are building slower than we burn.

Energy creation is slow. Data consumption is instantaneous. And between those two speeds lies a widening moral and physical gap.

When we run a model, render an image, or stream a video, we aren’t just using electricity. We’re releasing sunlight that’s been waiting since the dawn of life to be freed. The electrons are real, finite, and irreplaceable in any human timeframe — yet we treat data as limitless because its cost is invisible.

Less than two percent of all new data is retained after a year. Ninety-eight percent disappears — deleted, overwritten, or simply forgotten. Still, we build ever-larger servers to hold it. We cool them, power them, and replicate them endlessly. It’s as if we’ve confused movement with meaning.

The Age of the Cat-Video Factory

We’ve built cat-video factories on the same grid that could power breakthroughs in medicine, energy, and climate.

There’s nothing wrong with joy or humor. Those things are a beautiful part of being human. But we’ve industrialized the trivial. We’re spending ancient energy to create data that doesn’t last the length of a memory. The cost isn’t measured in dollars; it’s measured in sunlight.

Every byte carries a birth certificate of energy. It may have traveled billions of years to arrive in your device, only to vanish in seconds. We are burning time itself — and we’re getting faster at it every year.

When Compute Outruns Creation

AI’s rise has made this imbalance impossible to ignore. A one-gigawatt data campus, a power draw once associated with a national power plant, can now belong to a single company. Each facility may cost tens of billions of dollars and consume electricity on par with small nations. We’ve reached a world where the scarcity of electrons defines the frontier of innovation.

It’s no longer the code that limits us; it’s the current.

The technology sector celebrates speed: faster training, faster inference, faster deployment. But nature doesn’t share that sense of urgency. Energy obeys the laws of thermodynamics, not the ambitions of quarterly growth. What took the universe nearly 14 billion years to refine (the conversion of matter into usable light) we now exhaust at a pace that makes geological patience seem quaint.

This isn’t an argument against technology. It’s a reminder that progress without proportion becomes entropy. Efficiency without stewardship turns intelligence into heat.

The Stewardship of Light

There’s a better lens for understanding this moment. One that blends physics with purpose.

If all usable power began in the Big Bang and continues as sunlight, then every act of computation is a continuation of that ancient light’s journey. To waste data is to interrupt that journey; to use it well is to extend it. Stewardship, then, isn’t just environmental — it’s existential.

In finance, CFOs use Return on Invested Power (ROIP) to judge whether the energy they buy translates into profitable compute and operational output. But there’s a deeper layer worth considering: a moral ROIP. Beyond the dollars, what kind of intelligence are we generating from the power we consume? Are we creating breakthroughs in medicine, energy, and climate, or simply building larger cat-video factories?

Both forms of ROIP matter. One measures financial return on electrons; the other measures human return on enlightenment. Together, they remind us that every watt carries two ledgers: one economic, one ethical.

We can’t slow AI’s acceleration. But we can bring its metabolism back into proportion. That begins with awareness… the humility to see that our data has ancestry, that our machines are burning the oldest relics of the cosmos. Once you see that, every click, every model, every watt takes on new weight.

The Pause Before Progress

Perhaps our next revolution isn’t speed at all. Perhaps it’s stillness, the mere ability to pause and ask whether the next byte we create honors the journey of the photons that power it.

The call isn’t to stop. It’s to think proportionally.

To remember that while energy cannot be created or destroyed, meaning can.

And that the true measure of progress may not be how much faster we can turn power into data, but how much more wisely we can turn data into light again.

Sunlight is the power. Data is the shadow.

The question is whether our shadows are getting longer… or wiser.

# # #

About the Author

Paul Quigley is President of Airsys Cooling Technologies. He writes about the intersection of power, data, and stewardship. Airsys focuses on groundbreaking technology with a conscience.

The post The Speed of Burn appeared first on Data Center POST.

Faster, Smarter, Modular: The Future According to CDM with Ron Mann

13 November 2025 at 17:00

Originally posted on RC Wireless.

In this episode, Ron Mann, Vice President of Compu Dynamics Modular (CDM), traces his path from pioneering rack and infrastructure design at Compaq to leading innovation in modular data centers tailored for today’s AI-driven demands. With decades of experience in IT and data center evolution, Ron explains how CDM is reshaping infrastructure deployment through fully integrated, application-focused modular systems.

He shares how CDM’s factory-built approach minimizes cost and complexity while supporting high-density AI and inference workloads, particularly at the edge. Ron offers a sharp perspective on how power density shifts—especially post-AI surge—are pushing legacy data center designs beyond their limits and making modular solutions a necessity rather than an alternative.

From leveraging stranded power and deploying AI racks on rooftops, to redefining employee training and fostering a culture of ownership and adaptability, Ron offers candid insights on operational efficiency, talent strategy, and future-proofing infrastructure. His vision for partnerships, performance-first design, and eliminating unnecessary “boxes” in data center architecture reveals how CDM is enabling IT innovation at scale.

To continue reading, please click here.

The post Faster, Smarter, Modular: The Future According to CDM with Ron Mann appeared first on Data Center POST.

AI’s growth calls for useful IT efficiency metrics

The digital infrastructure industry is under pressure to measure and improve the energy efficiency of the computing work that underpins digital services. Enterprises seek to maximize returns on cost outlay and operating expenses for IT hardware, and regulators and local communities need reassurance that the energy devoted to data centers is used efficiently. These objectives call for a productivity metric to measure the amount of work that IT hardware performs per unit of energy.

With generative AI projected to boost data center power demand substantially, the stakes have arguably never been higher. Fortunately, organizations monitoring the performance and efficiency of their AI applications can benefit from experiences in the field of supercomputing.

In September 2025, Uptime Intelligence participated in a panel discussion about AI energy efficiency at the Yotta 2025 conference in Las Vegas (Nevada, US). The panelists drew on their extensive experience in supercomputing to weigh in on discussions around AI training efficiency. They discussed the need for a productivity metric to measure it, as well as a key caveat organizations need to consider.

Organizations such as Uptime Intelligence and The Green Grid have published guidance on calculating work capacity for various types of IT. Software applications and their supporting IT hardware vary significantly, so consensus on a single metric to compare energy performance remains out of reach for the foreseeable future. However, tracking energy performance in a given facility over time is important, and is achievable practically for many organizations today.

Defining AI computing work

The work capacity of IT equipment is needed to calculate its utilization and energy performance when running an application. The Green Grid white paper IT work capacity metric V1 — a methodology describes how to calculate a work capacity value for CPU-based servers. Uptime Intelligence has proposed methodologies to extend this to accelerator-based servers for AI and other applications (see Calculating work capacity for server and storage products).

Floating point operations per second (FLOPS) is a common and readily available unit of work capacity for CPU- or accelerator-based servers. In 2025, an AI server’s capacity usually ranks in the trillions of FLOPS, or teraFLOPS (TFLOPS).

Not all FLOPS are the same

Even though large-scale AI training is radically reshaping many commercial data centers, the underlying software and hardware are not fundamentally new. AI training is essentially one of many applications of supercomputing. Supercomputing software, along with the IT selection and configuration, varies in many ways — and one of the most relevant variables when monitoring energy performance is floating point precision. This precision (measured in bits) is analogous to the number of decimal places used in inputs and outputs.

GPUs and other accelerators can perform 64-, 32-, 16-, 8- and 4-bit calculations, and some can use mixed precision. While a high-performance computing (HPC) workload such as computational fluid dynamics might use 64-bit (“double precision”) floating point calculations for high accuracy, other applications do not have such exacting requirements. Lower precision consumes less memory per calculation — and, crucially, less energy. The panel discussion at Yotta raised an important distinction: unlike most engineering and research applications, today’s AI training and inference calculations typically use 4-bit precision.

Floating point precision is necessary information when evaluating a TFLOPS benchmark. A 64-bit precision calculation TFLOPS value is one-half of a 32-bit TFLOPS value — or one-sixteenth of a 4-bit TFLOPS value. For consistent AI work capacity calculation, Uptime Institute recommends that IT operators use 32-bit TFLOPS values supplied by their AI server providers.
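
As a worked illustration of that recommendation, the sketch below normalizes vendor-quoted TFLOPS figures to a 32-bit equivalent. It assumes throughput doubles each time precision is halved, which is consistent with the two ratios stated above; real accelerators may deviate from this idealized scaling, so treat it as a rough normalization only.

```python
# Normalize quoted TFLOPS figures to 32-bit equivalents, assuming throughput
# doubles each time precision is halved (consistent with: 64-bit = 1/2 of
# 32-bit, and 64-bit = 1/16 of 4-bit). Real hardware may not scale this way.

FACTOR_VS_32BIT = {64: 0.5, 32: 1.0, 16: 2.0, 8: 4.0, 4: 8.0}

def tflops_at_32bit(quoted_tflops: float, quoted_precision_bits: int) -> float:
    """Convert a TFLOPS value quoted at a given precision to its 32-bit equivalent."""
    return quoted_tflops / FACTOR_VS_32BIT[quoted_precision_bits]

# Hypothetical vendor figures
print(tflops_at_32bit(4_000.0, 4))   # 500.0 TFLOPS at 32-bit precision
print(tflops_at_32bit(50.0, 64))     # 100.0 TFLOPS at 32-bit precision
```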

Working it out: work per energy

The maximum work capacity calculation for a server can be aggregated at the level of a rack, a cluster or a data center. Work capacity multiplied by average utilization (as a percentage) produces an estimate of the amount of calculation work (in TFLOPS) that was performed over a given period. Operators can divide this figure by the energy consumption (in MWh) over that same time to yield an estimate of the work’s energy efficiency, in TFLOPS/MWh. Separate calculations for CPU-based servers, accelerator-based servers, and other IT (e.g., storage) will provide a more accurate assessment of energy performance (see Figure 1).

Figure 1 Examples of IT equipment work-per-energy calculations

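The calculation described above takes only a few lines of code. The Python sketch below uses hypothetical capacity, utilization and energy figures, follows the convention of reporting the result in TFLOPS/MWh, and keeps CPU-based and accelerator-based servers separate, as recommended.

```python
# Work-per-energy estimate, following the approach described above:
# (work capacity x average utilization) / energy consumption, per equipment
# class. All figures below are hypothetical placeholders.

def work_per_energy(capacity_tflops: float, avg_utilization: float,
                    energy_mwh: float) -> float:
    """Estimated delivered work per unit of energy, in TFLOPS/MWh."""
    return (capacity_tflops * avg_utilization) / energy_mwh

equipment_classes = {
    # name: (aggregate 32-bit TFLOPS, average utilization, MWh over the period)
    "cpu_servers":         (12_000.0, 0.35, 900.0),
    "accelerator_servers": (80_000.0, 0.60, 2_400.0),
}

for name, (capacity, utilization, mwh) in equipment_classes.items():
    print(f"{name}: {work_per_energy(capacity, utilization, mwh):.1f} TFLOPS/MWh")
```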

Even when TFLOPS figures are normalized to the same precision, it is difficult to use this information to draw meaningful comparisons between the energy performance of significantly different hardware types and configurations. Accelerator power consumption does not scale linearly with utilization levels. Additionally, the details of software design will determine how closely real-world application performance aligns with simplified work capacity benchmarks.

However, many organizations can benefit from calculating this TFLOPS/MWh productivity metric and are already well equipped to do so. This calculation is most useful to quantify efficiency gains over time, e.g., from IT refresh and consolidation, or refinements to operational control. In some jurisdictions, tracking FLOPS/MWh as a productivity metric can satisfy some regulatory requirements. IT efficiency is often overlooked in favor of facility efficiency — but a consistent productivity metric can help to quantify available improvements.


The Uptime Intelligence View

Generative AI training is poised to drive up data center energy consumption, prompting calls for regulation, responsible resource use and return on investment. A productivity metric can help meet these objectives by consistently quantifying the amount of computing work performed per unit of energy. Supercomputing experts agree that operators should track and use this data, but they caution against interpreting it without the necessary context. A simplified, practical work-per-energy metric is most useful for tracking improvement in one facility over time.

The following participants took part in the panel discussion on energy efficiency at Yotta 2025:

  • Jacqueline Davis, Research Analyst at Uptime Institute (moderator)
  • Dr Peter de Bock, former Program Director, Advanced Research Projects Agency–Energy
  • Dr Alfonso Ortega, Professor of Energy Technology, Villanova University
  • Dr Jon Summers, Research Lead in Data Centers, Research Institutes of Sweden

Other related reports published by Uptime Institute include:

Calculating work capacity for server and storage products

The following Uptime Institute experts were consulted for this report:

Jay Dietrich, Research Director of Sustainability, Uptime Institute

The post AI’s growth calls for useful IT efficiency metrics appeared first on Uptime Institute Blog.

Is this the data center metric for the 2030s?

When the PUE metric was first proposed and adopted at a Green Grid meeting in California in 2008, few could have forecast how important this simple ratio — despite its limitations — would become.

Few would have expected, too, that the industry would make so little progress on another metric proposed at those same early Green Grid meetings. While PUE highlighted the energy efficiency of the non-IT portion of a data center’s energy use, a separate “useful work” metric was intended to identify how much IT work was being done relative to the total facility and IT energy consumed. A list of proposals was put forward, votes were taken, but none of the ideas came near to being adopted.

Sixteen years later, minimal progress has been made. While some methods for measuring “work per energy” have been proposed, none have garnered any significant support or momentum. Efforts to measure inefficiencies in IT energy use — by far the largest source of both energy consumption and waste in a data center — have constantly stalled or failed to gain support.

That is set to change soon. The European Union and key member states are looking to adopt representative measurements of server (and storage) work capacity — which, in turn, will enable the development of a work per energy or work per watt-hour metric (see below and accompanying report).

So far, the EU has provided limited guidance on the work per energy metric, which it will need to agree in 2025 or 2026. However, it will clearly require a technical definition of CPU, GPU and accelerator work capacity, along with energy-use boundaries.

Once the metric is agreed upon and adopted by the EU, it will likely become both important and widely cited. It would be the only metric that links IT performance to the energy consumed by the data center. Although it may take several years to roll out, this metric is likely to become widely adopted around the world.

The new metric

The EU officials developing and applying the rules set out in the Energy Efficiency Directive (EED) are still working on many key aspects of a data center labeling scheme set to launch in 2026. One area they are struggling with is the development of meaningful IT efficiency metrics.

Uptime Institute and The Green Grid’s proposed work per energy metric is not the only option, but it offers many key advantages. Chief among them: it has a clear methodology; the work capacity value increases with physical core count and newer technology generations; and it avoids the need to measure the performance of every server. The methodology can also be adapted to measure work per megawatt-hour for GPU/accelerator-based servers and dedicated storage equipment. While there are some downsides, these will likely be shared by most alternative approaches.

Full details of the methodology are outlined in the white papers and webinar listed at the end of the report. The initial baseline work — on how to calculate work capacity of standard CPU-based servers — was developed by The Green Grid. Uptime Institute Sustainability and Energy Research Director Jay Dietrich extended the methodology to GPU/accelerator-based servers and dedicated storage equipment, and expanded it to calculate the work per megawatt-hour metric.

The methodology has five components:

  • Build or access an inventory of all the IT in the data center. The required data on CPU, GPU and storage devices should be available in procurement systems, inventory management tools, CMMS or some DCIM platforms.
  • Calculate the work capacity of the servers using the PerfCPU values available on The Green Grid website. These values are based on CPU cores by CPU technology generation.
  • Include GPU or accelerator-based compute servers using the 32-bit TFLOPS metrics. An alternative performance metric, such as Total Processing Performance (TPP), may be used if agreed upon later.
  • Include online, dedicated storage equipment (excluding tape) measured in terabytes.
  • Collect data, usually from existing systems, on:
    • Power supplied to CPUs, GPUs and storage systems. This should be relatively straightforward if the appropriate meters and databases are in place. Where there is insufficient metering, it may be necessary to use reasonable allocation methods.
    • Utilization. It is critical for a work per energy metric to know the utilization averages. This data is routinely monitored in all IT systems, but it needs to be collected and normalized for reporting purposes.

With this data, the work per energy metric can be calculated by adding up and averaging the number of transactions per second, then dividing it by the total amount of energy consumed. Like PUE, it is calculated over the course of a year to give an annual average. A simplified version, for three different workloads, is shown in Figure 1.

Figure 1 Examples of IT equipment work-per-energy calculations

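As a rough illustration of how the five components above fit together, the sketch below rolls a hypothetical inventory up into a work per energy figure for each workload category, with storage reported separately in terabytes per MWh. The capacity values are placeholders, not PerfCPU figures from The Green Grid tables, and the category split reflects the separate reporting of workload types discussed later in this report.

```python
# Rough sketch of the methodology above: an inventory with work-capacity
# values (PerfCPU-derived for CPU servers, 32-bit TFLOPS for accelerators),
# average utilization and metered energy, rolled up per workload category.
# All numbers are hypothetical placeholders.

from collections import defaultdict

inventory = [
    # (workload category, work capacity, average utilization, MWh per year)
    ("standard_cpu", 5_200.0, 0.30, 400.0),     # PerfCPU-based capacity (placeholder)
    ("ai_inference", 18_000.0, 0.45, 650.0),    # 32-bit TFLOPS
    ("ai_training",  60_000.0, 0.65, 1_800.0),  # 32-bit TFLOPS
]
storage_tb, storage_mwh = 9_500.0, 210.0        # dedicated online storage (excl. tape)

work = defaultdict(float)
energy = defaultdict(float)
for category, capacity, utilization, mwh in inventory:
    work[category] += capacity * utilization    # delivered work over the year
    energy[category] += mwh

for category in work:
    print(f"{category}: {work[category] / energy[category]:.1f} work units per MWh")
print(f"storage: {storage_tb / storage_mwh:.1f} TB per MWh")
```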

Challenges

There are undoubtedly some challenges with this metric. One is that Figure 1 shows three different figures for three different workloads — whereas, in contrast, data centers usually report a single PUE number. This complexity, however, is unavoidable when measuring very different workloads, especially if the figure(s) are to give meaningful guidance on how to make efficiency improvements.

Under its EED reporting scheme, the EU has so far allowed for the inclusion of only one final figure each for server work capacity and storage capacity reporting. While a single figure works for storage, the different performance characteristics of standard CPU servers, AI inference and high-performance compute, and AI training servers make it necessary to report their capacities separately. Uptime argues that combining these three workloads into a single figure — essentially for at-a-glance public consumption — distorts and oversimplifies the report, risking the credibility of the entire effort. Whatever the EU decides, the problem is likely to be the same for any work per energy metric.

A second issue is that 60% of operators lack a complete component and location inventory of their IT infrastructure. Collecting the required information for the installed infrastructure, adjusting purchasing contracts to require inventory data reporting, and automating data collection for new equipment represents a considerable effort, especially at scale. By contrast, a PUE calculation only requires two meters at a minimum. 

However, most of the data collection — and even the calculations — can be automated once the appropriate databases and software are in place. While collecting the initial data and building the necessary systems may take several months, doing so will provide ongoing data to support efficiency improvements. In the case of this metric, data is already available from The Green Grid and Uptime will support the process.

There are several reasons why, until now, no work per energy metric has been successful. Two are particularly noteworthy. First, IT and facilities organizations are often either entirely separate — as in colocation provider/tenant — or they do not generally collaborate or communicate, as is common in enterprise IT. Even when this is not the case, and the data on IT efficiency is available, chief information officers or marketing teams may prefer not to publicize serious inefficiencies. However, such objections will no longer hold sway if legal compliance is required.

The second issue is that industry technical experts have often let the perfect stand in the way of the good, raising concerns about data accuracy. For an effective work per energy metric, the work capacity metric needs to provide a representative, configuration-independent measure that tracks increased work capacity as physical core count increases and as new CPU and GPU generations are introduced.

The Green Grid and Uptime methodologies will no doubt be questioned or opposed by some, but they achieve the intended goal. The work capacity metric does not have to drill down to specific computational workloads or application types, as some industry technologists demand. The argument that there is no reasonable metric, or that it lacks critical support, is no longer grounds for procrastination. IT energy inefficiencies need to be surfaced and understood.

Further information

To access the Uptime report on server and storage capacity (and on work per unit of energy):

Calculating work capacity for server and storage products

To access The Green Grid reports on IT work capacity:

IT work capacity metric V1 — a methodology

Searchable PerfCPU tables by manufacturer

To access an Uptime webinar discussing the metrics discussed in this report:

Calculating Data Center Work Capacity: The EED and Beyond

The post Is this the data center metric for the 2030s? appeared first on Uptime Institute Blog.

The two sides of a sustainability strategy

While much has been written, said and taught about data center sustainability, there is still limited consensus on the definition and scope of an ideal data center sustainability strategy. This lack of clarity has created much confusion, encouraged many operators to pursue strategies with limited results, and enabled some to make claims that are ultimately of little worth.

To date, the data center industry has adopted three broad, complementary approaches to sustainability:

  • Facility and IT sustainability. This approach prioritizes operational efficiency, minimizing the energy, direct carbon and water footprints of IT and facility infrastructure. It directly addresses the operational impacts of individual facilities, reducing material and energy use and costs. Maximizing the sustainability of individual facilities is key to addressing the increased government focus on regulating individual data centers.
  • Ecosystem sustainability. This strategy focuses on carbon neutrality (or carbon negativity), water positivity and nature positivity across the enterprise. Ecosystem sustainability offsets the environmental impacts of an enterprise’s operations, which may increase business costs.
  • Overall sustainability. While some data center operators promote the sustainability of their facilities with limited efforts on ecosystem sustainability, others build their brand around ecosystem sustainability with minimal discussion about the sustainability of their facilities. Although it is common for organizations to make efforts in both areas, it is less common for the strategies to be integrated as a part of a coherent plan.

Each approach has its own benefits and challenges, providing different levels of business and environmental performance improvement. This report is an extension and update to the Sustainability Series of reports, published by Uptime Intelligence in 2022 (see below for a list of the reports), which detailed the seven elements of a sustainability strategy.

Data center sustainability

Data center sustainability involves incorporating sustainability and efficiency considerations into siting, design and operational processes throughout a facility’s life. The organizations responsible for siting and design, IT operations, facility operations, procurement, contracting (colocation and cloud operators) and waste management must embrace the enterprise’s overall sustainability strategy and incorporate it into their daily operations.

Achieving sustainability objectives may require a more costly initial investment for an individual facility, but the reward is likely an overall lower cost of ownership over its life. To implement a sustainability strategy effectively, an operator must address the full range of sustainability elements:

  • Siting and design. Customer and business needs dictate a data center’s location. Typically, multiple sites will satisfy these criteria; however, the location should also be selected based on whether it can help optimize the facility’s sustainability performance. Operators should focus on maximizing free cooling and carbon-free energy consumption while minimizing energy and water consumption. The design should choose equipment and materials that maximize the facility’s environmental performance.
  • Cooling system. The design should minimize water and energy use, including capturing available free-cooling hours. In water-scarce or water-stressed regions, operators should deploy waterless cooling systems. Where feasible and economically viable, heat reuse systems should also be incorporated into the design.
  • Standby power system. The standby power system design should enable fuel flexibility (able to use low-carbon or carbon-free fuels) and provide primary power capability. It should be capable and permitted to deliver primary power for extended periods. This enables the system to support grid reliability and assist in addressing the intermittency of wind and solar generation contracted to supply power to the data center, thereby reducing the carbon intensity of the electricity consumption.
  • IT infrastructure efficiency. IT equipment should be selected to maximize the average work delivered per watt of installed capacity. The installed equipment should run at or close to the highest practical utilization level of the installed workloads while meeting their reliability and resiliency requirements. IT workload placement and management software should be used to monitor and optimize the IT infrastructure performance.
  • Carbon-free energy consumption. Operators should work with electricity utilities, energy retailers, energy developers and regulators to maximize the quantity of clean energy consumed and minimize location-based emissions. Over time, they should plan to increase carbon-free energy consumption to 90% or more of the total consumption. Timelines will vary by region depending on the economics and availability of carbon-free energy.
  • End-of-life equipment reuse and materials recovery. Operators need an end-of-life equipment management process that maximizes the reuse of equipment and components, both within the organization and through refurbishment and use by others. Where equipment must be scrapped, there should be a process in place to recover valuable metals and minerals, as well as energy, through environmentally responsible processes.  
  • Scope 3 emissions management. Operators should require key suppliers to maintain a sustainability strategy, publicly disclose their greenhouse gas (GHG) emissions inventory and reduction goals, and demonstrate progress toward their sustainability objectives. There should be consequences in place for suppliers that fail to show reasonable progress.

While these strategies may appear simple, creating and executing a sustainability strategy requires the commitment of the whole organization — from technicians and engineers to procurement, finance and executive leadership. In some cases, financial criteria may need to shift from considering the initial upfront costs to the total cost of ownership and the revenue benefits/enhancements gained from a demonstrably sustainable operation. A data center sustainability strategy can enhance business and environmental performance.

Ecosystem sustainability

An ecosystem sustainability strategy emphasizes mitigating and offsetting the environmental impacts of an operator’s data center portfolio. While these efforts do not change the environmental operating profile of individual data centers, they are designed to benefit the surrounding community and natural environment. Such projects and environmental offsets are typically managed at the enterprise level rather than the facility level and represent a cost to the enterprise.

  • Carbon-neutral or carbon-negative operations. Operators should purchase energy attribute certificates (EACs) and carbon capture offsets to reduce or eliminate their Scope 1, 2 and 3 emissions inventory. The offsets are generated primarily from facilities geographically separate from the data center facilities. EACs and offsets can be purchased directly from brokers or from operators of carbon-free energy or carbon capture systems.
  • Water-positive operations. Operators should work with communities and conservation groups to implement water recharge and conservation projects that return more water to the ecosystem than is used across their data centers. Examples include wetlands reclamation, water replenishment, support of sustainable agriculture, and leak detection and minimization systems for water distribution networks. These projects can benefit the local watershed or unrelated, geographically distinct watersheds.
  • Nature-positive facilities. The data center or campus should be landscaped to regenerate and integrate with the natural landscape and local ecosystem. Rainwater and stormwater should be naturally filtered and reused where practical. The landscape should be designed and managed to support local flora and fauna, ensuring that the overall campus is seamlessly integrated into the local ecosystem. The overall intent is to make the facility as “invisible” as possible to the local community.
  • Emissions reductions achieved with IT tools. Some operators and data center industry groups quantify and promote the emissions reduction benefits (known as Scope 4 “avoided emissions”) generated from the operation of the IT infrastructure. They assert that the “avoided emissions” achieved through the application of IT systems to increase the operational efficiency of systems or processes, or “dematerialize” products, can offset some or all of the data center infrastructure’s emissions footprint. However, these claims should be approached with caution, as there is a high degree of uncertainty in the calculated quantities of “avoided emissions.”
  • Pro-active work with supply chains. Some operators work directly with supply chain partners to decarbonize their operations. This approach is practical when an enterprise represents a significant percentage of a supplier’s revenue. However, it becomes impractical when an operator’s purchases represent only a small percentage of the supplier’s business.

Ecosystem sustainability seeks to deliver environmental performance improvements to operations and ecosystems outside the operator’s direct control. These improvements compensate for and offset any remaining environmental impacts following the full execution of the data center sustainability strategy. They typically represent a business cost and enhance an operator’s commercial reputation and brand.

Where to focus

Facility and IT and ecosystem sustainability strategies are complementary, addressing the full range of sustainability activities and opportunities. In most organizations, it will be necessary to cover all of these areas, often by different teams focusing on their respective domains.

An operator’s primary focus should be improving the operational efficiency and sustainability performance of its data centers. Investments in the increased use of free cooling, automated control of chiller and IT space cooling systems, and IT consolidation projects can yield significant energy, water and cost savings, along with reductions in GHG emissions. These improvements will not only reduce the environmental footprint of the data center but can also improve its business performance.

These efforts also enable operators to proactively address emerging regulatory and standards frameworks. Such regulations are intended to increase the reporting of operating data and metrics and may ultimately dictate minimum performance standards for data centers.

To reduce the Scope 2 emissions (purchased electricity) associated with data center operations to zero, operators need to work with utilities, energy retailers, and the electricity transmission and distribution system operators. The shared goal is to help build a resilient, interconnected electricity grid populated by carbon-free electricity generation and storage systems — a requirement for government net-zero mandates.

Addressing ecosystem sustainability opportunities is a valuable next step in an operator’s sustainability journey. Ecosystem projects can enhance the natural environment surrounding the data facility, improve the availability of carbon-free energy and water resources locally and globally, and directly support, inform and incentivize the sustainability efforts of customers and suppliers.

Data center sustainability should be approached in two separate ways: first, the infrastructure itself and, second, the ecosystem. Confusion and overlap between these two aspects can lead to unfortunate results. For example, in many cases, a net-zero and water-positive data center program is (wrongly) accepted as an indication that an enterprise is operating a sustainable data center infrastructure.


The Uptime Intelligence View

Operators should prioritize IT and facilities sustainability over ecosystem sustainability. The execution and results of an IT and facilities sustainability strategy directly minimize the environmental footprint of a data center portfolio, while maximizing its business and sustainability performance.

Data reporting and minimum performance standards embodied in enacted or proposed regulations are focused on the operation of individual data centers, not the aggregated enterprise-level sustainability performance. An operator must demonstrate that it has a highly utilized IT infrastructure (maximized work delivered per unit of energy consumed) and has minimized the energy and water consumption and GHG emissions associated with its facility operations.

Pursuing an ecosystem sustainability strategy is the logical next step for operators that want to do more and further enhance their sustainability credentials. However, an ecosystem sustainability strategy should not be pursued at the expense of an IT and facilities strategy, nor used to shield poor or marginal facility and IT performance.

The following Uptime Institute expert was consulted for this report:
Jay Paidipati, Vice President Sustainability Program Management, Uptime Institute

Other related reports published by Uptime Institute include:
Creating a sustainability strategy
IT Efficiency: the critical core of sustainability
Three key elements: water, circularity and siting
Navigating regulations and standards
Tackling greenhouse gases
Reducing the energy footprint

The post The two sides of a sustainability strategy appeared first on Uptime Institute Blog.
