Scarcity-Native Planning: Operating Models for Constrained Ecosystems

5 March 2026 at 15:00

By Anoop Thulaseedas, Associate Director, Solutions & Consulting at Bristlecone.

Industries across technology, semiconductors, infrastructure, energy, and advanced manufacturing are entering a sustained period of structural scarcity. Explosive growth in AI workloads, electrification, defense modernization, and industrial expansion has outpaced the scaling capacity of upstream ecosystems. In this environment, planning models built on forecast accuracy and assumed supply elasticity are no longer sufficient.

Scarcity is increasingly structural in critical industrial ecosystems.

Competitive advantage now depends on recognizing supply constraints as the governing reality of the enterprise. Capacity availability, not demand projection, determines portfolio sequencing, commercial commitments, capital allocation, and revenue timing.

This shift requires rethinking planning itself. Rather than predicting demand and expecting supply to respond, organizations must deliberately govern constrained capacity across interconnected production and deployment layers.

This paper introduces a scarcity-native operating model in which allocation governance, constraint visibility, and cross-layer orchestration replace forecast-centric optimization. While illustrated through AI infrastructure ecosystems, the underlying logic applies broadly across any multi-constraint industrial environment.

Scarcity as a Multi-Constraint Ecosystem

Modern scarcity rarely originates from a single component or isolated bottleneck. Instead, effective supply is governed by a chain of constraints distributed across interconnected production and deployment layers. These layers span geographies, capital cycles, and technical disciplines – from semiconductor fabs to packaging plants, specialty materials facilities, infrastructure sites, and commissioning environments.

Scarcity today is not confined to procurement pipelines or logistics networks. It emerges across an integrated physical system in which each layer possesses independent throughput ceilings, capital intensity, and scaling timelines. Expanding capacity at one node without synchronizing adjacent constraints redistributes bottlenecks downstream.

Scarcity must therefore be understood as a layered physical system rather than an isolated material shortage. While illustrated through semiconductor and data center ecosystems, the same multi-constraint logic applies to energy systems, transportation networks, and advanced manufacturing environments.

The Constraint Chain

Layer 1: Core Component Manufacturing

This layer resides within semiconductor fabrication facilities operated by memory and logic OEMs. It encompasses wafer capacity, yield variability, production allocation decisions, and process throughput limitations. In AI ecosystems, this includes HBM memory production and the output of accelerator silicon. Cleanroom capacity, tool availability, yield ramp maturity and throughput define the production ceiling.

Layer 2: Integration and Advanced Packaging

Following fabrication, components must be integrated into deployable modules within advanced packaging and OSAT facilities. High-precision stacking, bonding technologies, and thermal integration processes convert discrete dies into functional assemblies. Packaging throughput frequently becomes the next gating constraint, independent of wafer supply, due to equipment intensity, cycle-time sensitivity, and specialized labor limitations.

Layer 3: Substrates and Interposers

Specialized substrate and interposer manufacturing is conducted in a limited number of precision materials facilities. These components form the physical interconnect between the compute, memory, and power-delivery layers. Long qualification cycles, limited supplier redundancy, and fine-line manufacturing complexity create structural bottlenecks that often surface only after upstream output expands.

Layer 4: Infrastructure Readiness

Even when integrated hardware is available, deployment depends on the physical site’s readiness. Data center campuses and supporting electrical infrastructure determine installation viability. Rack-level power density, cooling architecture, transformers, switchgear, and grid interconnections govern whether hardware can be activated. These constraints frequently delay monetization despite upstream production success.

Layer 5: Qualification and Commissioning

Final validation, integration testing, and cluster bring-up occur within integration labs and on-site commissioning environments. Skilled engineering capacity, testing infrastructure, and activation throughput determine how quickly deployed assets become operational and revenue-generating.

Orchestrating the Full Constraint System

Optimizing any single layer in isolation produces limited value. Increasing wafer output without packaging capacity, accelerating packaging without infrastructure readiness, or expanding infrastructure without commissioning throughput results in stranded capital and delayed revenue realization.

Value creation depends on synchronized decision-making across the entire constraint chain – from fabrication to live deployment.
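
As a rough illustration of this logic, the short sketch below (in Python, with entirely hypothetical capacities) treats the five layers as a chain whose effective output is set by the tightest link. Expanding one layer in isolation does not raise the ceiling; it only moves the bottleneck.

```python
# Illustrative only: effective ecosystem output is gated by the tightest
# layer in the constraint chain. All capacities are hypothetical.

layers = {
    "core component manufacturing":     80,   # units/week
    "integration and packaging":        95,
    "substrates and interposers":      110,
    "infrastructure readiness":        120,
    "qualification and commissioning": 100,
}

def bottleneck(capacities):
    """Return the gating layer and the system throughput it imposes."""
    layer = min(capacities, key=capacities.get)
    return layer, capacities[layer]

print(bottleneck(layers))
# ('core component manufacturing', 80)

# Double wafer output without touching adjacent layers...
layers["core component manufacturing"] *= 2
print(bottleneck(layers))
# ...and the constraint simply reappears downstream:
# ('integration and packaging', 95)
```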

Scarcity, therefore, represents an enterprise operating model challenge spanning Planning, Sourcing, Engineering, Infrastructure, and Finance. It cannot be managed as a downstream supply execution issue alone.

Core Differentiators of Scarcity-Native Operating Models

Scarcity-native operating models are defined by structural shifts in how planning is conducted and governed.

1. Scarcity-Native Planning vs Traditional Planning

Illustrative Case: Automotive Semiconductor Reallocation

During the 2020–2022 semiconductor shortage, several automotive manufacturers confronted an immediate collapse of forecast-driven production logic. Rather than waiting for supply normalization, some shifted to allocation-driven governance.

Ford Motor Company provides a clear illustration. With chip supply constrained, the company prioritized high-margin vehicles and new product launches over lower-margin configurations. Production schedules were aligned to confirmed semiconductor availability rather than unconstrained dealer forecasts. Non-essential features were temporarily removed from certain models to maximize yield from scarce components.

The result was not merely damage control. By deliberately allocating constrained inputs toward strategic priorities, Ford expanded its order bank and preserved margin performance in the face of systemic supply tightness.

This behavior reflects entitlement-based baselining and value-optimized deployment sequencing β€” core characteristics of scarcity-native operating models.

  • Entitlement-Based Planning Baselines: Planning begins with confirmed supplier allocations, contracted capacity reservations, infrastructure availability, and commissioning throughput – not unconstrained demand forecasts. These entitlements define deployable reality.
  • Allocation-Driven Governance: Explicit allocation logic determines how constrained capacity is distributed across programs, regions, and customers. This replaces reactive firefighting with structured prioritization.
  • Value-Optimized Deployment Sequencing: Deployment decisions prioritize revenue realization, utilization efficiency, strategic commitments, and long-term platform positioning – not simply maximizing unit output.
  • Continuous Replanning Cadence: Planning operates dynamically. As supplier commitments shift, packaging schedules move, infrastructure readiness evolves, and commissioning throughput fluctuates, allocation decisions are updated in near real time.

In constrained ecosystems, planning becomes less about predicting demand and more about governing capacity.
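
A minimal sketch of what allocation-driven governance looks like in practice, assuming invented programs, volumes, and value scores: the planning baseline is a confirmed entitlement rather than a forecast, and constrained capacity flows to the highest-value demand first.

```python
# Minimal sketch of entitlement-based, value-optimized allocation under
# scarcity. Programs, volumes, and per-unit value scores are invented;
# a real model would add contracts, regions, and a replanning cadence.

entitlement = 1000  # confirmed supplier allocation (units), not a forecast

programs = [
    # (name, requested units, value score per unit)
    ("flagship platform launch",        600, 9.5),
    ("contracted customer commitments", 400, 8.0),
    ("low-margin configurations",       500, 3.0),
]

# Constrained capacity flows to the highest-value demand first.
allocation, remaining = {}, entitlement
for name, requested, value in sorted(programs, key=lambda p: p[2], reverse=True):
    granted = min(requested, remaining)
    allocation[name] = granted
    remaining -= granted

print(allocation)
# {'flagship platform launch': 600,
#  'contracted customer commitments': 400,
#  'low-margin configurations': 0}
```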

2. Internal Competition and Portfolio Trade-Off Management

Scarcity does not only constrain external supply. It creates internal competition for limited deployment capacity.

In infrastructure-intensive environments, multiple initiatives frequently compete for the same constrained resources – fabrication allocations, packaging throughput, power envelopes, commissioning capacity, or site readiness. Without centralized governance, these programs generate fragmented demand signals that dilute negotiating leverage, misalign capital sequencing, and create suboptimal capacity utilization.

Scarcity-native organizations formalize portfolio-level prioritization tied explicitly to constrained supply envelopes. Executive trade-off forums align strategic objectives with physical deployment ceilings. Capital investments, infrastructure readiness and customer commitments are sequenced deliberately rather than pursued in parallel under optimistic capacity assumptions.

Allocation decisions are evaluated across explicit dimensions – financial impact, reliability, service performance and long-term strategic positioning – ensuring scarce capacity is deployed where it creates the highest enterprise value rather than the loudest internal demand.

3. Sourcing Embedded Into Planning Decisions

In constrained ecosystems, sourcing cannot function as a downstream procurement activity. It becomes a structural input into planning itself.

Leading organizations embed supplier allocation commitments, capacity reservation agreements and qualification timelines directly into deployment roadmaps. Confirmed supplier envelopes define planning baselines. Tier-2 and Tier-3 visibility informs risk exposure and contingency design. Power equipment lead times and infrastructure component availability are treated as governing constraints rather than execution afterthoughts.

This integration shifts sourcing from transactional purchasing toward capacity governance. Structured forward visibility and commitment mechanisms provide suppliers with the economic rationale to sustain constrained production capability, reducing volatility amplification across the ecosystem.

When sourcing is embedded into planning, deployable capacity becomes a coordinated outcome rather than a negotiated surprise.

4. Engineering as a Practical Scarcity Lever

While many upstream constraints remain outside direct operational control, engineering decisions materially influence how scarcity is absorbed.

Scarcity-native organizations emphasize platform standardization to reduce component fragmentation and dependency on narrow configurations. Design-for-availability principles favor widely supported architectures. Modular infrastructure design enables flexible sequencing of deployment. Qualification of alternate equipment SKUs and suppliers increases interchangeability where feasible.

These choices do not eliminate structural constraints. They expand optionality within them.

Engineering flexibility reduces concentration risk, improves interchangeability and increases the organization’s ability to realign deployment in response to shifting constraint patterns. In constrained environments, architecture decisions become strategic levers of capacity governance.

Short-Term vs. Medium-Term Scarcity Response

Scarcity response requires distinct behaviors across time horizons. Scarcity-native operating models deliberately differentiate between near-term stabilization of constrained capacity and medium-term expansion of structural optionality.

Short-Term (0–90 Days): Stabilize Utilization

  • Formal allocation governance across competing programs
  • Rapid replanning cycles incorporating real-time supplier signals
  • Prioritization of high-value customers and contracted commitments
  • Cross-functional executive decision forums

The objective is to absorb volatility without cascading disruption.

Medium-Term (3–12 Months): Expand Optionality

  • Supplier diversification and alternate sourcing paths
  • Capacity reservation agreements
  • Accelerated qualification of alternate SKUs and components
  • Platform standardization and modular infrastructure design
  • Multi-constraint scenario modeling

The objective shifts from stabilization to structural resilience within constrained ecosystems.

Operationalizing Scarcity Through S&OP and S&OE

Traditional planning architectures separate Sales & Operations Planning (S&OP) from Sales & Operations Execution (S&OE). Under structural scarcity, this separation breaks down.

Instead, S&OP and S&OE function as a closed-loop control system.

S&OP – Policy and Governance

S&OP defines allocation policy:

  • Establishes entitlement baselines
  • Sets guardrails based on confirmed supply envelopes
  • Aligns portfolio priorities with financial and strategic tradeoffs
  • Determines how scarcity is distributed across programs and regions

S&OP governs how limited capacity should be used.

S&OE – Dynamic Allocation

S&OE continuously adjusts allocation decisions:

  • Reallocates constrained supply as conditions evolve
  • Adjusts deployment cadence based on supplier commitments and readiness
  • Protects utilization, service levels, and revenue realization

S&OE governs how limited capacity is used today.

The Planning Logic Shift

Traditional logic: Plan → Execute

Scarcity-native logic: Policy → Allocate → Learn → Re-Decide

Execution feedback updates governance decisions. Governance decisions reshape execution priorities. Planning becomes a continuous decision cycle rather than a periodic balancing exercise.
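
The loop can be made concrete with a schematic sketch; the policy weights, supply envelope, and delivery noise below are stand-ins for real S&OP guardrails and S&OE telemetry, not a production planning engine.

```python
import random

random.seed(7)

# S&OP (policy): how scarce capacity *should* be split across programs.
policy = {"program_a": 0.6, "program_b": 0.4}
entitlement = 100  # confirmed supply envelope (units), not a forecast

# S&OE (execution): allocate, observe, re-decide - a continuous cycle.
for cycle in range(3):
    plan = {p: round(entitlement * w) for p, w in policy.items()}  # Allocate
    delivered = round(entitlement * random.uniform(0.8, 1.0))      # Learn
    entitlement = delivered                                        # Re-Decide
    print(f"cycle {cycle}: plan={plan}, deliverable next cycle={delivered}")
```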

This reframes planning from forecast management into enterprise-level capacity governance.

Conclusion: From Forecast Accuracy to Capacity Governance

Structural scarcity is redefining operational excellence. In constrained ecosystems, supply availability, shaped by interconnected physical bottlenecks, determines what can be delivered, when revenue is realized, and where competitive advantage accrues.

Organizations that succeed are not those that eliminate constraints, but those that govern them deliberately. Scarcity-native operating models shift the enterprise mindset from optimization to orchestration: allocating limited capacity where it creates the greatest strategic and financial impact.

Although illustrated through AI infrastructure, the same logic applies across power systems, advanced manufacturing components, transportation capacity, critical materials, and skilled labor markets. Constraint chains are becoming the defining architecture of modern industry.

The transition to scarcity-native operating models requires deliberate organizational design – from governance structures and measurement frameworks to replanning cadences and cross-functional decision rights. Organizations beginning this journey benefit from structured diagnostic assessments that map current constraint visibility, allocation governance maturity, and planning integration gaps against the target operating model.

In a constrained world, performance is no longer determined by how accurately demand is forecasted. It is determined by how effectively access to limited capacity is governed.

The post Scarcity-Native Planning: Operating Models for Constrained Ecosystems appeared first on Data Center POST.

Pre-Connectorized Fiber for 400G/800G and Beyond: Implications for U.S. DCs

10 February 2026 at 19:00

Author: Paulo Campos, President, R&M USA Inc.

U.S. data centers are moving quickly from 100G/200G to 400G and 800G, while preparing for 1.6T. The main driver is AI: training and inference fabrics generate huge east-west (server-to-server) traffic, and any network bottleneck leaves expensive GPUs/accelerators underutilized. Cisco notes that modern AI workloads are “data-intensive” and generate “massive east-west traffic within data centers”.

This step-change is now viable because switching and NIC silicon can deliver much higher bandwidth density. Broadcom’s Tomahawk 5-class devices, for example, support up to 128×400GbE or 64×800GbE in a single chip, enabling higher-radix leaf/spine designs with fewer boxes and links. Optics are improving cost- and power-efficiency as well; a Cisco Live optics session highlights a representative comparison of one 400G module at ~12W versus four 100G modules at ~17W for the same aggregate bandwidth.
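
A back-of-envelope sketch shows why radix matters for box count; it assumes an idealized non-blocking two-tier fabric, a 50/50 split of each switch's ports between downlinks and uplinks, and a hypothetical 512-server pod.

```python
import math

def two_tier_boxes(servers: int, radix: int) -> tuple[int, int]:
    """Leaf and spine counts for an idealized non-blocking two-tier
    fabric where each switch splits its radix 50/50 between downlinks
    and uplinks. Purely illustrative sizing arithmetic."""
    down = radix // 2
    leaves = math.ceil(servers / down)
    spines = math.ceil(leaves * down / radix)
    return leaves, spines

for radix in (32, 64):  # e.g. 32x800G vs 64x800G-class switch silicon
    leaves, spines = two_tier_boxes(512, radix)
    print(f"radix {radix}: {leaves} leaves + {spines} spines = {leaves + spines} boxes")

# radix 32: 32 leaves + 16 spines = 48 boxes
# radix 64: 16 leaves +  8 spines = 24 boxes
```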

In parallel, multi-site “metro cloud” growth is increasing demand for faster data center interconnect (DCI). Coherent pluggables and emerging standards such as OIF 800ZR are making routed IP-over-DWDM architectures more practical for metro DCI.

What this changes

As data centers move to 400G/800G+, the physical layer shifts toward higher-density fiber with tighter loss budgets and stricter operational discipline:

  • Parallel optics increase multi-fiber connectivity. Many short-reach 400G links (e.g., 400GBASE-DR4) use four parallel single-mode fiber pairs with 100G PAM4 per lane, which increases the use of MPO/MTP trunking, polarity management and breakout harnesses/cassettes over simple duplex patching. VSFF connectors (for example MMC/SN-MT) are emerging as an alternative to familiar MTP/MPO connectivity.
  • PAM4 is less forgiving. Operators typically specify lower-loss components, reduce mated pairs, and enforce more rigorous inspection and cleaning to protect link margin (a back-of-envelope loss-budget sketch follows this list).
  • Single-mode (OS2) expands inside the building. New builds often standardize on OS2 for spine/leaf and any run beyond in-row distances, while copper is largely confined to very short in-rack DACs (with AOCs/AECs or fiber used as lengths increase).
  • DCI emphasizes single-mode duplex LC with coherent optics/DWDM, where fiber quality and minimal patching become critical.
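
Here is the loss-budget arithmetic referenced above, sketched in Python. The 3.0 dB channel budget, the 0.4 dB/km fiber attenuation, and the per-mated-pair connector losses are illustrative placeholders rather than datasheet values, but they show how quickly mated pairs consume margin at PAM4 rates; real designs must use the transceiver spec and measured component losses.

```python
# Back-of-envelope link loss budget with illustrative placeholder values.

CHANNEL_BUDGET_DB = 3.0        # assumed short-reach SMF channel budget
FIBER_LOSS_DB_PER_KM = 0.4     # assumed OS2 attenuation

def link_loss(length_m, mated_pairs, loss_per_pair_db):
    return (length_m / 1000) * FIBER_LOSS_DB_PER_KM + mated_pairs * loss_per_pair_db

for pairs, per_pair in [(4, 0.75), (4, 0.35), (2, 0.35)]:
    loss = link_loss(500, pairs, per_pair)
    print(f"{pairs} mated pairs @ {per_pair} dB: "
          f"loss={loss:.2f} dB, margin={CHANNEL_BUDGET_DB - loss:+.2f} dB")

# 4 pairs @ 0.75 dB: loss=3.20 dB, margin=-0.20 dB  <- budget blown
# 4 pairs @ 0.35 dB: loss=1.60 dB, margin=+1.40 dB
# 2 pairs @ 0.35 dB: loss=0.90 dB, margin=+2.10 dB
```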

The pre-con solution

Pre-connectorized (pre-terminated) cabling systems – including hardened variants – fit current U.S. requirements for speed, performance and repeatability:

  • Faster deployment and predictable performance: factory-terminated “plug-and-play” trunks and panels reduce on-site termination, minimize installer variability, and help teams hit tight loss budgets at 400G/800G and beyond.
  • Higher density and simpler change control: preterm MPO/MTP trunks with modular panels/cassettes pack more fibers into less space and make adds/changes faster with less disruption.
  • Alignment to standards and repeatable architectures: ANSI/TIA-942 defines minimum requirements for data-center infrastructure, while ANSI/BICSI 002-2024 provides widely used best-practice guidance for data-center design and implementation – both encouraging well-defined pathways and modular, repeatable approaches.
  • Resilience for harsh pathways: between buildings, in ducts, and at the edge (modular/outdoor DCs), hardened features such as robust pulling grips and improved protection against water/dirt can reduce rework during construction.

As U.S. data centers push into 400G/800G and prepare for 1.6T, pre-connectorized fiber helps deliver deployment speed, high-density layouts, and repeatable, testable performance – often with less reliance on scarce specialist termination labor.

# # #

References

  1. Cisco. “AI Networking in Data Centers.” Cisco website. (Accessed Jan 2026).
  2. Cisco Live 2025. “400G, 800G, and Terabit Pluggable Optics” (BRKOPT-2699).
  3. OIF. “Implementation Agreement for 800ZR Coherent Interfaces (OIF-800ZR-01.0).” Oct 8, 2024.
  4. Semiconductor Today. “OIF releases 800ZR coherent interface implementation agreement.” Nov 1, 2024.
  5. Ciena. “Standards Update: 200GbE, 400GbE and Beyond.” Jan 29, 2018.
  6. TIA. “ANSI/TIA-942 Standard.” TIA Online.
  7. BICSI. “ANSI/BICSI 002-2024: The Standard for Data Center Design.” BICSI website.

The post Pre-Connectorized Fiber for 400G/800G and Beyond: Implications for U.S. DCs appeared first on Data Center POST.

Data center survey reveals majority believe renewables and BESS are the ideal energy mix, power issues start in 2027

2 February 2026 at 15:26

54% of respondents cited “energy availability and redundancy” as the single greatest obstacle to successful data center development between now and 2030.

From ESS News

Law firm Foley & Lardner LLP today released its 2026 Data Center Development Report, focusing on growth and challenges in the data center boom that aims to sustain surging AI and LLM usage.

A major focus was on energy, with 54% of respondents citing “energy availability and redundancy” as the single greatest obstacle to successful data center development between now and 2030.

In terms of the right energy mix for data centers, 55% of respondents agreed that the ideal energy mix to meet the growing power demand of data centers is largely renewables (41%), followed by natural gas (17%), nuclear (16%), and BESS (14%).

Nearly half (48%) of industry participants named advances in energy efficiency (which often includes storage optimization) as the greatest opportunity for development through the end of the decade, and nearly three in four respondents (74%) said advanced energy storage systems like batteries, hybrid solutions, and microgrids are the best way to ensure energy resilience.

Only 14% of developers are actually pursuing modular and small modular nuclear reactors as a viable energy opportunity.

Intriguingly, 63% anticipate a “strategic correction” in the market by 2030, driven by the intense competition for power, with one unnamed banking executive in the report saying, “Once power runs out in 2027 or 2028, that’s where we think deal flow will start to slow down.”

The survey qualified 105 U.S.-based respondents, all with direct experience in data center development, energy procurement, technology delivery, or operations within the past 24 months.

Energy analyst firm Wood Mackenzie identified data centers as one of the five trends to watch in 2026 for global energy storage, and within the past week a battery storage project gave up its grid connection to a data center and re-tooled its batteries to earn revenue without the connection.

What they said:

Daniel Farris, partner and co-lead of Foley’s data center and digital infrastructure team: “There is a Gold Rush mentality right now around securing power. That’s a big part of why people feel there’s a bubble. There’s going to be a period in the next two to three years where power at necessary levels is going to be really hard to come by.”

Rachel Conrad, senior counsel and co-lead of Foley’s data center and digital infrastructure team: “Over the next five to 10 years, power providers will need to either grow capacity or increase efficiency to meet the demand fueled by data centers.”

Solar-plus-storage for data centers: not a simple switch

2 February 2026 at 11:18

Renewables and storage could reliably power data centers, but success requires active grids, coordinated planning, and the right mix of technologies. Hitachi Energy CTO, Gerhard Salge, tells pv magazine that holistic approaches ensure technical feasibility, economic viability, and energy system resilience.

As data centers grow in size and complexity, supplying them with cheap and reliable power has never been more pressing. Gerhard Salge, chief technology officer (CTO) at Hitachi Energy, a unit of Japanese conglomerate Hitachi, shed light on the relationship between renewable energy and data center operations, noting that while technically feasible, success requires careful planning, the right infrastructure, and a holistic approach.

“When we look at what’s happening in the grids, then renewables are an active element on the power generation side, and the data centers are an active element on the demand side,” Salge told pv magazine. “What you need in addition to that is in the dimensions of flexibility, for which we need storage and a grid that can actively act also here in order to bring all these elements together.”

According to Salge, the key is active grids, not passive systems that simply react to conditions. With more renewables, changing demand patterns, new load centers, and storage options like batteries and existing facilities such as pumped hydro, it is crucial to coordinate these resources actively to maintain supply security, power quality, and cost optimization.

“But when you talk about the impact and the correlation between renewables and data centers, you need always to consider this full scope of the flexibility in a power system of all the elements – demand side, generation side, storage side, and the active grid in between,” he said, noting that weak or congested grids would not serve this purpose.

AI data centers

Salge warned that not all data centers are the same. “There are conventional data centers and AI data centers,” he said. “Conventional data centers are essentially high-load systems with some fluctuations on top. They contain many processors handling requests – from search engines or other applications – so the workload is distributed stochastically across them. This creates a baseline load with random ups and downs, which is the typical load pattern of a conventional data center.”

AI workloads, in contrast, rely heavily on GPUs or AI accelerators, which consume significant power continuously. Unlike conventional data centers, AI data centers often run at sustained high load, sometimes close to maximum capacity for long periods.

Hitachi Energy CTO Gerhard Salge

Image: Hitachi Energy

“AI data centers are specifically good in doing parallel computing,” Salge explained. “So many of them are triggered with the same demand pattern at the same time, which creates these spikes up and down in the demand profile, and they come in parallel all together.”

These fluctuations challenge both the power supply and the voltage and frequency quality of the connected grid. “So, you need to transport active power from an energy storage system or a supercapacitor to the demand of the AI data center. And that then needs to involve really the control of the data center’s active power. What you need is the interaction between the storage unit and then the AI data center to provide active power or to absorb it afterwards when the peak goes down. That can be also done by a supercapacitor.”

Batteries can store much more energy than supercapacitors, but the latter can ramp smaller energies more frequently. “However, if you put a battery that is smaller than the load, and you really need to cycle the battery through its full capacity, the battery will not survive very long with your data center, because the frequency of these bursts is so high, then you are aging the battery very, very quickly, yeah, so supercapacitors can do more cycles,” Salge emphasized.

He also noted that batteries and supercapacitors are both mature technologies, but the optimal setup – whether one, the other, or a combination with traditional capacitors – depends on storage size, number of racks, voltage levels, and overall system design.
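
As a toy illustration of the division of labor Salge describes (not a power-systems model), the sketch below low-pass-filters a bursty, synchronized AI load: the slow component is drawn from the grid or battery, while the fast residual is the duty a supercapacitor would absorb. The load profile and filter constant are invented.

```python
# Toy signal-separation sketch with invented numbers.

steps = 60
base, burst = 50.0, 30.0   # MW: sustained load plus synchronized bursts
load = [base + (burst if (t // 5) % 2 == 0 else 0.0) for t in range(steps)]

alpha = 0.1                              # smoothing factor
grid, smoothed = [], sum(load) / len(load)
for p in load:
    smoothed += alpha * (p - smoothed)   # slow component -> grid / battery
    grid.append(smoothed)

fast = [p - g for p, g in zip(load, grid)]   # residual -> supercapacitor

print(f"swing seen by the grid: {max(grid) - min(grid):.1f} MW "
      f"(vs {max(load) - min(load):.1f} MW at the racks)")
print(f"peak supercapacitor duty: {max(abs(f) for f in fast):.1f} MW")
```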

Managing AI training bursts

Salge stressed the importance of complying with grid codes across geographies. “You need to become a good citizen to the power system,” he said. “You have to collaborate with local utilities to make sure that you are not infringing the grid codes and you are not disturbing with the data center back into the grid. A good way to do this, when renewables and data centers are co-located, is to manage renewable energy supply already inside the data center territory. Moreover, having a future-fit developed grid is a clear advantage. Because you have much more of these flexibility elements and the active elements to manage storage and renewable integration and to manage the dynamic loads of the data centers.”

If the grid is not future-fit with modern, actively operating equipment, operators will see significantly more stress. “With holistic planning, instead, you can even use some of the data center flexibility as a controllable and demand response kind of feature,” Salge said, adding that data center operators could coordinate AI training bursts to periods when the power system has more available capacity. This makes the data center a predictable, controllable demand, stressing the grid only when it is prepared.

“In conclusion, regarding technical feasibility: yes, it’s possible, but it requires the right configuration,” Salge said.

Economic feasibility

On economics, Salge believes solar and wind remain the cheapest power sources, even when accounting for the grid flexibility needed to integrate them with data centers. Solar is fastest to deploy, wind complements it well, and both can be scaled in parallel.

“Any increase in data center demand requires investment, whether from renewables or conventional power. Economics depend on the market, and market mechanisms, regulations, and technical grid planning are interconnected, influencing energy flow, pricing, and system stability,” he said.

“We recommend developers to work with all stakeholders – utilities, technology providers, and planners – from the start to ensure reliability, affordability, and social acceptance. Holistic planning avoids reactive fixes and leads to better long-term outcomes,” Salge concluded.

Testing fault at 100 MW battery disrupts Estonia-Finland power link

29 January 2026 at 13:09

During testing at Estonia’s 100 MW Kiisa battery park, both EstLink 1 and EstLink 2 tripped, triggering the most severe disturbance to the regional power grid since desynchronization from the Russian electricity system. As a result, nearly 1 GW of capacity was lost within seconds. The park’s owner has since publicly pointed to the battery manufacturer.

From ESS News

A disturbance in Estonia’s power system on Jan. 20 forced both EstLink interconnections between Estonia and Finland offline, cutting roughly 1,000 MW of capacity, equivalent to about 20% of the Baltic region’s winter electricity load.

The shortfall was initially covered by support from the continental European grid, as the 500 MW AC connection between Poland and Lithuania operated at double its rated capacity to compensate. Later, reserve capacity within the Baltic states was deployed.

The oscillations were triggered by a 100 MW/200 MWh battery energy storage system in Kiisa, just south of Tallinn, one of the largest battery storage systems in the Baltics. The incident occurred during final grid connection testing, which caused the DC cables to trip.

The €100 million facility, developed by Estonian company Evecon in partnership with French firms Corsica Sole and Mirova, features 54 battery containers supplied by Nidec Conversion.

To continue reading, please visit our ESS News website.

CMC launches new brand name after container firm merger

27 January 2026 at 20:25

Intermodal equipment and maintenance provider CMC today rebranded the company under its new name, combining three container hardware companies that merged in 2023 with the intent to address a wider market for maintenance and repair (M&R) and storage services for shipping containers.

Charleston, South Carolina-based CMC is the new name for those three firms: Marine Repair Service – Container Maintenance Company (CMC), ITI Intermodal, Inc. (ITI), and Columbia Container Services (CCS).

While the company’s name and visual identity are new, CMC said the organization will continue providing best-in-class maintenance, storage, and repair services for containerized freight across the South, Northeast and Midwest regions.

“This transformation represents the next step in our journey together,” Vince Marino, chief executive officer of CMC, said in a release. “Our new name and logo symbolize the strength that comes from the unity of three family-founded companies growing into one cohesive team. CMC stands for our shared commitment to safety, reliability, integrity, and the long-term relationships that define our success.”

Big Blue Poised To Peddle Lots Of On Premises GenAI

29 January 2026 at 05:53

If you want to know the state of the art in GenAI model development, you watch what the Super 8 hyperscalers and cloud builders are doing and you also keep an eye on the major model builders outside of these companies – mainly, OpenAI, Anthropic, and xAI as well as a few players in China like DeepSeek. …

Big Blue Poised To Peddle Lots Of On Premises GenAI was written by Timothy Prickett Morgan at The Next Platform.

Pushed By GenAI And Front End Upgrades, Ethernet Switching Hits New Highs

8 January 2026 at 21:20

By virtue of its scale-out capability, which is key for driving the size of absolutely enormous AI clusters, and of its universality, Ethernet switch sales are booming, and if recent history is any guide, we can expect Ethernet revenues to climb exponentially higher in the coming quarters as well. …

Pushed By GenAI And Front End Upgrades, Ethernet Switching Hits New Highs was written by Timothy Prickett Morgan at The Next Platform.

Real-World Diagnostics and Prognostics for Grid-Connected Battery Energy Storage Systems

12 December 2025 at 15:01

This is a sponsored article brought to you by The University of Sheffield.

Across global electricity networks, the shift to renewable energy has fundamentally changed the behavior of power systems. Decades-old engineering assumptions – predictable inertia, dispatchable baseload generation, and slow, well-characterized system dynamics – are now eroding as wind and solar become dominant sources of electricity. Grid operators face increasingly steep ramp events, larger frequency excursions, faster transients, and prolonged periods where fossil generation is minimal or absent.

In this environment, battery energy storage systems (BESS) have emerged as essential tools for maintaining stability. They can respond in milliseconds, deliver precise power control, and operate flexibly across a range of services. But unlike conventional generation, batteries are sensitive to operational history, thermal environment, state of charge window, system architecture, and degradation mechanisms. Their long-term behavior cannot be described by a single model or simple efficiency curve; it is the product of complex electrochemical, thermal, and control interactions.

Most laboratory tests and simulations attempt to capture these effects, but they rarely reproduce the operational irregularities of the grid. Batteries in real markets are exposed to rapid fluctuations in power demand, partial state of charge cycling, fast recovery intervals, high-rate events, and unpredictable disturbances. As Professor Dan Gladwin, who leads Sheffield’s research into grid-connected energy storage, puts it, “you only understand how storage behaves when you expose it to the conditions it actually sees on the grid.”

This disconnect creates a fundamental challenge for the industry: How can we trust degradation models, lifetime predictions, and operational strategies if they have never been validated against genuine grid behavior?

Few research institutions have access to the infrastructure needed to answer that question. The University of Sheffield is one of them.

Sheffield’s Centre for Research into Electrical Energy Storage and Applications (CREESA) operates one of the UK’s only research-led, grid-connected, multi-megawatt battery energy storage testbeds. Image: The University of Sheffield

Sheffield’s unique facility

The Centre for Research into Electrical Energy Storage and Applications (CREESA) operates one of the UK’s only research-led, grid-connected, multi-megawatt battery energy storage testbeds. This environment enables researchers to test storage technologies not just in simulation or controlled cycling rigs, but under full-scale, live grid conditions. As Professor Gladwin notes, “we aim to bridge the gap between controlled laboratory research and the demands of real grid operation.”

At the heart of the facility is an 11 kV, 4 MW network connection that provides the electrical and operational realism required for advanced diagnostics, fault studies, control algorithm development, techno-economic analysis, and lifetime modeling. Unlike microgrid scale demonstrators or isolated laboratory benches, Sheffield’s environment allows energy storage assets to interact with the same disturbances, market signals, and grid dynamics they would experience in commercial deployment.

“The ability to test at scale, under real operational conditions, is what gives us insights that simulation alone cannot provide.” – Professor Dan Gladwin, The University of Sheffield

The facility includes:

  • A 2 MW / 1 MWh lithium titanate system, among the first independent grid-connected BESS of its kind in the UK
  • A 100 kW second-life EV battery platform, enabling research into reuse, repurposing, and circular-economy models
  • Support for flywheel systems, supercapacitors, hybrid architectures, and fuel-cell technologies
  • More than 150 laboratory cell-testing channels, environmental chambers, and impedance spectroscopy equipment
  • High-speed data acquisition and integrated control systems for parameter estimation, thermal analysis, and fault response measurement

The infrastructure allows Sheffield to operate storage assets directly on the live grid, where they respond to real market signals, deliver contracted power services, and experience genuine frequency deviations, voltage events, and operational disturbances. When controlled experiments are required, the same platform can replay historical grid and market signals, enabling repeatable full power testing under conditions that faithfully reflect commercial operation. This combination provides empirical data of a quality and realism rarely available outside utility-scale deployments, allowing researchers to analyse system behavior at millisecond timescales and gather data at a granularity rarely achievable in conventional laboratory environments.

According to Professor Gladwin, “the ability to test at scale, under real operational conditions, is what gives us insights that simulation alone cannot provide.”

Dan Gladwin, Professor of Electrical and Control Systems Engineering, leads Sheffield’s research into grid-connected energy storage. Image: The University of Sheffield

Setting the benchmark with grid scale demonstration

One of Sheffield’s earliest breakthroughs came with the installation of a 2 MW / 1 MWh lithium titanate demonstrator, a first-of-a-kind system installed at a time when the UK had no established standards for BESS connection, safety, or control. Professor Gladwin led the engineering, design, installation, and commissioning of the system, establishing one of the country’s first independent megawatt scale storage platforms.

The project provided deep insight into how high-power battery chemistries behave under grid stressors. Researchers observed sub-second response times and measured the system’s capability to deliver synthetic inertia-like behavior. As Gladwin reflects, “that project showed us just how fast and capable storage could be when properly integrated into the grid.”

But the demonstrator’s long-term value has been its continued operation. Over nearly a decade of research, it has served as a platform for:

  • Hybridization studies, including battery-flywheel control architectures
  • Response time optimization for new grid services
  • Operator training and market integration, exposing control rooms and traders to a live asset
  • Algorithm development, including dispatch controllers, forecasting tools, and prognostic and health management systems
  • Comparative benchmarking, such as evaluation of different lithium-ion chemistries, lead-acid systems, and second-life batteries

A recurring finding is that behavior observed on the live grid often differs significantly from what laboratory tests predict. Subtle electrical, thermal, and balance-of-plant interactions that barely register in controlled experiments can become important at megawatt-scale, especially when systems are exposed to rapid cycling, fluctuating set-points, or tightly coupled control actions. Variations in efficiency, cooling system response, and auxiliary power demand can also amplify these effects under real operating stress. As Professor Gladwin notes, “phenomena that never appear in a lab can dominate behavior at megawatt scale.”

These real-world insights feed directly into improved system design. By understanding how efficiency losses, thermal behavior, auxiliary systems, and control interactions emerge at scale, researchers can refine both the assumptions and architecture of future deployments. This closes the loop between application and design, ensuring that new storage systems can be engineered for the operational conditions they will genuinely encounter rather than idealized laboratory expectations.

Ensuring longevity with advanced diagnostics

Sheffield’s Centre for Research into Electrical Energy Storage and Applications (CREESA) enables researchers to test storage technologies not just in simulation or controlled cycling rigs, but under full-scale, live grid conditions. Image: The University of Sheffield

Ensuring the long-term reliability of storage requires understanding how systems age under the conditions they actually face. Sheffield’s research combines high-resolution laboratory testing with empirical data from full-scale grid-connected assets, building a comprehensive approach to diagnostics and prognostics. In Gladwin’s words, “A model is only as good as the data and conditions that shape it. To predict lifetime with confidence, we need laboratory measurements, full-scale testing, and validation under real-world operating conditions working together.”

A major focus is accurate state estimation during highly dynamic operation. Using advanced observers, Kalman filtering, and hybrid physics-ML approaches, the team has developed methods that deliver reliable state-of-charge (SOC), state-of-health (SOH), and state-of-power (SOP) estimates during rapid power swings, irregular cycling, and noisy conditions where traditional methods break down.
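
For readers unfamiliar with the technique, the sketch below shows the general shape of Kalman-style SOC estimation: coulomb counting as the prediction step, corrected by a voltage-derived measurement. Every parameter here is invented for illustration, and Sheffield’s actual observers are considerably richer.

```python
# Sketch of Kalman-style SOC estimation. The linear OCV(SOC) map,
# noise levels, and cell parameters are all invented for illustration.

import random

random.seed(1)
CAP_AH, DT_H = 100.0, 1 / 3600           # cell capacity; 1 s time step

def ocv_to_soc(v):                        # inverse of the invented OCV map
    return (v - 3.0) / 1.2

soc_true, soc_est, p = 0.80, 0.60, 0.05   # start with a poor estimate
q, r = 1e-7, 4e-4                         # process / measurement variances

for _ in range(3600):                     # one hour at a 50 A discharge
    i_meas = 50.0 + random.gauss(0, 0.5)  # noisy current sensor
    soc_true -= 50.0 * DT_H / CAP_AH

    # Predict: coulomb counting with the measured current.
    soc_est -= i_meas * DT_H / CAP_AH
    p += q

    # Correct: noisy terminal voltage mapped through the OCV curve.
    v_meas = 3.0 + 1.2 * soc_true + random.gauss(0, 0.01)
    k = p / (p + r)                       # scalar Kalman gain
    soc_est += k * (ocv_to_soc(v_meas) - soc_est)
    p *= 1 - k

print(f"true SOC {soc_true:.3f}, estimated SOC {soc_est:.3f}")
```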

Another key contribution is understanding cell-to-cell divergence in large strings. Sheffield’s data shows how imbalance accelerates near SOC extremes, how thermal gradients drive uneven ageing, and how current distribution causes long-term drift. These insights inform balancing strategies that improve usable capacity and safety.

Sheffield has also strengthened lifetime and degradation modeling by incorporating real grid behavior directly into the framework. By analyzing actual market signals, frequency deviations, and dispatch patterns, the team uncovers ageing mechanisms that do not appear during controlled laboratory cycling and would otherwise remain hidden.

These contributions fall into four core areas:

State Estimation and Parameter Identification

  • Robust SOC/SOH estimation
  • Online parameter identification for equivalent circuit models
  • Power capability prediction using transient excitation
  • Data selection strategies under noise and variability

Degradation and Lifetime Modelling

  • Degradation models built on real frequency and market data
  • Analysis of micro cycling and asymmetric duty cycles
  • Hybrid physics-ML forecasting models

Thermal and Imbalance Behavior

  • Characterizing thermal gradients in containerized systems
  • Understanding cell imbalance in large-scale systems
  • Mitigation strategies at the cell and module level
  • Coupled thermal-electrical behavior under fast cycling

Hybrid Systems and Multi-Technology Optimization

  • Battery-flywheel coordination strategies
  • Techno-economic modeling for hybrid assets
  • Dispatch optimization using evolutionary algorithms
  • Control schemes that extend lifetime and enhance service performance

Beyond grid-connected systems, Sheffield’s diagnostic methods have also proved valuable in off-grid environments. A key example is the collaboration with MOPO, a company deploying pay-per-swap lithium-ion battery packs in low-income communities across Sub-Saharan Africa. These batteries face deep cycling, variable user behavior, and sustained high temperatures, all without active cooling or controlled environments. The team’s techniques in cell characterization, parameter estimation, and in-situ health tracking have helped extend the usable life of MOPO’s battery packs. “By applying our know-how, we can make these battery-swap packs clean, safe, and significantly more affordable than petrol and diesel generators for the communities that rely on them,” says Professor Gladwin.

Sheffield’s diagnostic methods have proved valuable in off-grid environments, including the collaboration with MOPO in Sub-Saharan Africa. Image: MOPO

Collaboration and the global future

A defining strength of Sheffield’s approach is its close integration with industry, system operators, technology developers, and service providers. Over the past decade, its grid-connected testbed has enabled organizations to trial control algorithms, commission their first battery assets, test market participation strategies, and validate performance under real operational constraints.

These partnerships have produced practical engineering outcomes, including improved dispatch strategies, refined control architectures, validated installation and commissioning methods, and a clearer understanding of degradation under real-world market operation. According to Gladwin, “It is a two-way relationship: we bring the analytical and research tools, industry brings the operational context and scale.”

One of Sheffield’s earliest breakthroughs came with the installation of a 2 MW / 1 MWh lithium titanate demonstrator. Professor Gladwin led the engineering, design, installation, and commissioning of the system, establishing one of the UK’s first independent megawatt-scale storage platforms. Image: The University of Sheffield

This two-way exchange, combining academic insight with operational experience, ensures that Sheffield’s research remains directly relevant to modern power systems. It continues to shape best practice in lifetime modelling, hybrid system control, diagnostics, and operational optimization.

As electricity systems worldwide move toward net zero, the need for validated models, proven control algorithms, and empirical understanding will only grow. Sheffield’s combination of full-scale infrastructure, long-term datasets, and collaborative research culture ensures it will remain at the forefront of developing storage technologies that perform reliably in the environments that matter most: the real world.

Polar Racking secures first Australian contract with 240MW Maryvale solar-plus-storage site

16 January 2026 at 02:21

Canada-based solar mounting systems provider Polar Racking has entered the Australian market through its involvement in the 240MW Maryvale solar-plus-storage project in New South Wales, marking the company’s first project deployment in the country.

Why Compliance to Harmonic Studies Is Now Mandatory for Modern Power Systems

Modern power systems are undergoing fundamental transformation. The rapid growth of inverter-based resources – such as battery energy storage systems (BESS), solar PV plants, wind farms, HVDC links, electric vehicle chargers, and power-electronic-dominated industrial loads – has significantly altered the harmonic behavior of electricity... Read more

The post Why Compliance to Harmonic Studies Is Now Mandatory for Modern Power Systems appeared first on EEP - Electrical Engineering Portal.

The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling

30 December 2025 at 15:00

As data centers evolve to support AI, edge computing, and high-density cloud architectures, the challenge is no longer just maintaining an optimal power usage effectiveness (PUE); it is about achieving thermal reliability under unprecedented compute loads. Direct-to-chip liquid cooling continues to push the envelope on heat transfer performance, but one underestimated element in overall system reliability is the material composition of the fluid conveyance network itself. The hoses and fittings that transport coolant through these systems operate in extreme thermal and chemical environments, and their design directly influences uptime, maintenance intervals, and total cost of ownership.

Why Material Selection Matters

At its core, a liquid cooling system is only as reliable as its weakest component. If hoses, fittings, or seals fail due to poor material compatibility, the result could be leaks, contamination, or shortened system life, leading to downtime and costly remediation. Rubber and rubber-like materials are critical in hose assemblies, as they must balance flexibility for installation and serviceability with long-term resistance to temperature, pressure, permeation, and coolants and their additives.

The challenge is that not all rubber and rubber-like materials are created equal. Each formulation is a complex recipe of polymers, plasticizers, fillers, and curing agents designed to deliver specific performance characteristics. The wrong selection can lead to issues such as fluid permeation, premature aging, or contamination of the coolant. In mission-critical environments like data centers, where even minor disruptions are unacceptable, this risk is magnified.

Temperature and Chemical Compatibility

Although data center cooling systems typically operate at temperatures between 45°C (113°F) and 65°C (149°F), sometimes reaching 100°C (212°F), those ranges can stress certain materials. Nitrile rubber, for example, performs well in oil-based environments but ages quickly in water-glycol systems, especially at higher temperatures. This can cause hardening, cracking, or coolant contamination.

By contrast, ethylene propylene diene monomer (EPDM) rubber has excellent resistance to water, glycols, and the additives commonly used in data center coolants, such as corrosion inhibitors and biocides. EPDM maintains its properties across the required operating range, making it a proven choice for direct-to-chip applications.

However, not all EPDM is the same. Developing the right EPDM for the application demands a deep understanding of polymer chemistry, filler interactions, and process control to precisely balance flexibility, heat resistance, and long-term stability.

Additionally, two curing processes, sulfur-cured and peroxide-cured, produce different performance outcomes. Sulfur-cured EPDM, while widely used, introduces zinc ions during the curing process. When exposed to deionized water, these ions can leach into the coolant, causing contamination and potentially degrading system performance. Peroxide-cured EPDM avoids this issue, offering higher temperature resistance, lower permeation rates, and greater chemical stability, making it the superior choice for modern liquid cooling.

Even among peroxide-cured EPDM compounds, long term performance is not uniform. While the cure system defines the crosslink chemistry, other formulation choices, particularly filler selection and dispersion, can influence how the material performs over time.

The use of fillers and additives is common in rubber compounding. These ingredients are often selected to control cost, improve processability, or achieve certain performance characteristics such as flame resistance, strength, or flexibility.

The challenge is that some filler systems perform well during initial qualification but are not optimized for long-term exposures faced in the operating environment. Certain fillers or processing aids can slowly migrate over time, introducing extractables into the coolant or subtly altering elastomer properties. For data center applications, EPDM compounds must therefore be engineered with a focus on long term stability, reinforcing why EPDM should not be treated as a commodity material in critical cooling systems.

Risks of Non-Compatible Materials

Material incompatibility can have several cascading effects:

  • Contamination – Non-compatible materials can leach extractables into the coolant, leading to discoloration, chemical imbalance, and reduced thermal efficiency.
  • Permeation – Some rubbers allow fluid to slowly migrate through the hose walls, causing coolant loss or altering the fluid mixture over time.
  • Premature Failure – Elevated temperatures can accelerate aging, leading to cracking, swelling, or loss of mechanical strength.
  • Leakage – Rubber under compression may deform over time, jeopardizing seal integrity if not properly formulated for resistance to compression set and tear.

In a recent two-week aging test at 80°C using a water-glycol coolant, hoses made of nitrile and sulfur-cured EPDM showed visible discoloration of the coolant, indicating leaching and breakdown of the material. Peroxide-cured EPDM, on the other hand, maintained stability, demonstrating its compatibility and reliability in long-term data center applications.

The Gates Approach

Drawing on lessons from mission critical industries that have managed thermal challenges for decades, Gates engineers apply material science rigor to the design of liquid cooling hoses for data center applications.

Rather than relying solely on initial material ratings or short-term qualification criteria, Gates begins by tailoring compound design to the operating environment. This includes deliberate control of polymer selection, filler systems, and cure chemistry to manage long term aging behavior, extractables, permeation, and retention of mechanical properties over time in high purity coolant systems.

Compounds are validated through extended aging and immersion testing that reflects real operating conditions, including exposure to heat, deionized water, and water-glycol coolants. This allows potential material changes to be identified and addressed during development, before installation in the field.

This material science driven process is applied across Gates liquid cooling platforms, including the Data Master, Data Master MegaFlex, and newly released Data Master Eco product lines. By engineering for long term stability rather than only initial compliance, Gates designs hose solutions intended to support reliable operation, predictable maintenance intervals, and extended service life in direct-to-chip liquid cooled data center environments.

Looking Ahead

As data centers continue to scale, thermal management solutions must adapt in parallel. Advanced architectures, higher rack densities, and growing environmental regulations all point to a future where liquid cooling is standard. In this environment, material selection is no longer a secondary consideration; it is foundational to system reliability.

Operators who prioritize material compatibility in fluid conveyance lines will benefit from longer service intervals, improved coolant stability, and reduced risk of downtime. In other words, the proper rubber formulation doesn’t just move fluid; it moves the industry forward.

At Gates, sustainable, high-performance cooling begins with the details. By focusing on the science of materials, we help ensure that data center operators can confidently deploy liquid cooling systems designed for the challenges of today and the innovations of tomorrow.

# # #

About the Author

Chad Chapman is a Mechanical Engineer with over 20 years of experience in the fluid power industry. He currently serves as a Product Application Engineering Manager at Gates, where he leads a team that provides technical guidance, recommendations, and innovative solutions to customers utilizing Gates products and services.

Driven by a passion for problem-solving, Chad thrives on collaborating with customers to understand their unique challenges and deliver solutions that optimize performance. He is energized by learning about new applications and technologies, especially where insights can be shared across industries. At Gates, he has been exploring the emerging field of direct-to-chip liquid cooling, an exciting extension of his deep expertise in thermal management. The rapid advancements in IT technology and AI have made his journey an inspiring and rewarding learning experience.

The post The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling appeared first on Data Center POST.
