
When Your Data Center Becomes a Liability Overnight

19 March 2026 at 14:00

How Centralized Infrastructure Intelligence Turns Emergency Replacements into Controlled Operations

Most infrastructure professionals spend their careers building for the planned: capacity expansions, technology refreshes, migration cycles that unfold over quarters or years. And then a Monday morning email changes everything.

A government agency bans equipment from a trusted vendor. A threat intelligence report reveals that a state-sponsored actor has been inside your network switches for eighteen months. A manufacturer announces that the platform running your entire campus backbone loses support in nine months. In each case, the same question emerges: how quickly can you identify every affected device across every facility, and how fast can you replace them without breaking what still works?

For a surprising number of organizations, the honest answer is: they don’t know. That gap between confidence in steady-state operations and readiness for unplanned mass replacement is where real risk lives.

The Forces That Turn Infrastructure Upside Down

Emergency hardware replacement at scale is not hypothetical. Recent years have produced real-world triggers across four broad categories, each with distinct operational implications.

Regulatory and geopolitical mandates. The federal effort to remove Chinese-manufactured telecommunications equipment from American networks—driven by the FCC’s Covered List and Section 889 of the National Defense Authorization Act—has forced carriers and federal contractors into wholesale infrastructure replacement on compliance timelines that don’t flex for budget cycles. The FCC has estimated the total program cost at nearly five billion dollars. Any organization touching federal dollars must verify its infrastructure is clean; if it isn’t, replacement is a compliance obligation, not a planning exercise.

Security crises that outpace patching. The Salt Typhoon campaign revealed that Chinese state-sponsored hackers had penetrated multiple major US telecommunications providers, maintaining persistent access for up to two years—exploiting legacy equipment, unpatched router vulnerabilities, and weak credential management. Investigators found routers with patches available for seven years that had never been applied. For affected carriers, the response demanded physical replacement of compromised infrastructure that could no longer be trusted regardless of patch status. When an adversary achieves sufficient persistence, patching becomes insufficient. Replacement is the only reliable remediation.

End-of-life announcements. Vendor lifecycle decisions create quieter but equally urgent pressure. An organization running multiple hardware platforms faces different end-of-support timelines for each, and dependencies between them mean replacing one can cascade into forced changes elsewhere. Without a consolidated view of what is running, where, and when it loses support, these effects are invisible until they cause failures.

Architectural shifts. Zero trust adoption, SASE frameworks, and cloud-delivered security are rendering entire categories of on-premises equipment architecturally obsolete—not because they’ve failed, but because the security model has moved on. The question is not whether legacy VPN appliances and perimeter firewalls will be replaced, but how quickly, and whether the organization has the visibility to execute in a controlled manner.

Why Standard Processes Break Down

Every mature IT organization has IMAC processes: Install, Move, Add, Change. These handle the predictable rhythm of infrastructure life. Emergency replacement programs share almost none of their characteristics.

They are triggered externally. Their scope is massive—hundreds or thousands of devices across multiple sites. They arrive without allocated budgets or pre-positioned inventory, carrying compliance deadlines indifferent to resource constraints.

The organizations that handle these events well recognize them for what they are: standalone programs needing their own governance, funding, and dedicated teams—and their own information infrastructure. That last requirement is where centralized infrastructure management becomes not a convenience but a prerequisite.

What Centralized Infrastructure Intelligence Must Deliver

Four questions—answered immediately.

What is affected, and where is it? When a regulatory notice references a specific manufacturer, or a security advisory identifies a particular hardware model and firmware version, the operations team needs a definitive count within hours, not weeks. Organizations maintaining a continuously updated centralized inventory—capturing hardware models, firmware versions, physical locations, logical roles, and contractual associations—can answer by running a query. Organizations relying on spreadsheets and periodic audits cannot. The difference in response time is typically measured in weeks, and in a compliance-driven scenario, weeks are what you don’t have. Equally important is dependency mapping: understanding that replacing a core switch will affect upstream routers, downstream access switches, and out-of-band management paths. Without it, a replacement that looks straightforward on paper can produce cascading outages in execution.
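The "answer by running a query" claim above can be made concrete with a minimal sketch. The schema, device models, and firmware values below are illustrative assumptions, not taken from any particular product; real firmware comparison would need semantic versioning rather than string ordering.

```python
import sqlite3

# Hypothetical inventory schema -- table and column names are
# illustrative, not from any specific tool.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE devices (
    id INTEGER PRIMARY KEY,
    model TEXT, firmware TEXT, site TEXT, role TEXT
);
CREATE TABLE dependencies (  -- (upstream, downstream) device pairs
    upstream INTEGER, downstream INTEGER
);
""")
con.executemany("INSERT INTO devices VALUES (?,?,?,?,?)", [
    (1, "SW-9000", "2.1", "Frankfurt", "core-switch"),
    (2, "SW-9000", "3.4", "London",    "core-switch"),
    (3, "RT-100",  "1.0", "Frankfurt", "edge-router"),
])
con.executemany("INSERT INTO dependencies VALUES (?,?)", [(3, 1)])

# "What is affected, and where?" -- suppose an advisory names model
# SW-9000 with firmware below 3.0. (String comparison stands in for
# proper version comparison in this sketch.)
affected = con.execute(
    "SELECT id, site FROM devices WHERE model = ? AND firmware < ?",
    ("SW-9000", "3.0"),
).fetchall()
print(affected)  # [(1, 'Frankfurt')]

# Dependency mapping: which devices sit upstream of the affected ones
# and will feel the replacement?
impacted = con.execute(
    "SELECT upstream FROM dependencies WHERE downstream IN "
    "(SELECT id FROM devices WHERE model = ? AND firmware < ?)",
    ("SW-9000", "3.0"),
).fetchall()
print(impacted)  # [(3,)]
```

The point is not the specific tooling but the shape of the data: once hardware identity, location, and dependency edges live in one queryable store, the "definitive count within hours" becomes a two-line query rather than a multi-week audit.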

What is the replacement path? A legacy switch may need to be replaced by different models depending on port density, power constraints, and compatibility with adjacent equipment. Workflow-driven execution ensures every replacement follows the same approval steps, documentation requirements, and validation procedures—preventing errors that compound in programs spanning hundreds of sites.
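A replacement mapping like the one described can be sketched as a simple constraint lookup. The model names, port counts, and PoE budgets below are hypothetical examples, not a real catalogue.

```python
# Hedged sketch: pick a replacement model from per-site constraints.
# Catalogue entries are illustrative assumptions.
def replacement_for(ports_needed: int, poe_budget_w: int) -> str:
    """Return the smallest hypothetical model meeting both constraints."""
    if ports_needed <= 24 and poe_budget_w <= 370:
        return "ACME-X24"
    if ports_needed <= 48 and poe_budget_w <= 740:
        return "ACME-X48"
    return "ACME-X48-HP"  # high-power variant as the fallback

print(replacement_for(48, 500))  # ACME-X48
print(replacement_for(12, 100))  # ACME-X24
```

Encoding the mapping once, and running every site through it, is what keeps hundreds of replacement decisions consistent instead of re-deriving them ad hoc per site.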

Where are we right now? Leadership needs a live view of progress—which sites are lagging, where tasks are stalled, which teams are hitting milestones. This enables resource reallocation, timely escalation of procurement bottlenecks, and an auditable record for regulators. It also surfaces patterns previously invisible: a region that consistently runs behind, or an approval step adding days of unnecessary latency.
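Surfacing a lagging region from workflow data can be as simple as aggregating dwell time per site. The task records and threshold below are illustrative assumptions.

```python
from statistics import mean

# Hypothetical workflow export: (site, days the task has sat in its
# current step). Values are illustrative.
tasks = [
    ("Berlin", 2), ("Berlin", 1),
    ("Milan", 9), ("Milan", 11),
    ("London", 3),
]

# Average dwell time per site; a consistently high average flags a
# region running behind or an approval step adding latency.
dwell = {
    site: mean(d for s, d in tasks if s == site)
    for site in {s for s, _ in tasks}
}
lagging = sorted(site for site, avg in dwell.items() if avg > 5)
print(lagging)  # ['Milan']
```

This is the "pattern previously invisible" in miniature: no single Milan task looks alarming, but the aggregate dwell time does.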

What did we learn? Emergency replacements are no longer rare—any organization operating at scale should expect one every few years. Those that conduct structured post-project reviews build a compounding advantage: better scoping templates, more accurate resource models, and pre-validated replacement mappings that make the next response faster.

Building Readiness Before the Next Crisis

Emergency replacements cannot be made painless—they are disruptive, expensive, and stressful regardless of preparation. But the difference between an organization that navigates one in three months and one that takes twelve is almost entirely a function of work done before the trigger.

That preparation has three dimensions: information readiness (a continuously updated inventory with hardware identity, location, firmware status, and dependency relationships), process readiness (defined workflow-driven procedures that activate quickly rather than being reinvented under pressure), and organizational readiness (governance, budget authority, and executive sponsorship that allows an emergency program to stand up as a dedicated initiative).

The organizations best positioned for the next regulatory mandate, zero-day disclosure, or end-of-life cascade are investing in that readiness today—not because they know what the trigger will be, but because they’ve built a discipline prepared for all of them.

# # #

About the Author

Oliver Lindner has over 30 years of experience in IT and the management of IT infrastructures, with a focus on data centers. He has worked for many years at FNT Software, a leading provider of integrated software solutions for IT management. In his current position as Director of Product Management, he is responsible for the strategic direction and continuous improvement of FNT's data center software products, with the aim of helping customers design their IT infrastructure efficiently and transparently.

Oliver Lindner places great importance on customer focus, innovation, and quality. His expertise also covers the development and delivery of Software as a Service (SaaS) solutions that give customers maximum flexibility and efficiency. To that end, he works closely with his team, partners, and customers to create sustainable, innovative software solutions.

The post When Your Data Center Becomes a Liability Overnight appeared first on Data Center POST.

The New Demands on Data Center and Storage Leaders

16 March 2026 at 18:00

Looking back on a career in IT, I wanted to reflect on the 20-plus years I spent working in and running data centers for Fortune 500 companies in the New York and New Jersey area. This was an exciting time leading both large and small teams through some of the most complex transformations in IT infrastructure. That included designing a trading floor infrastructure for a major bank that was implemented globally, overseeing the merger of two banks with very different IT backbones, driving a mainframe-to-open-systems modernization effort, managing a data center consolidation, and establishing global IT standards.

Today, the challenges to the job are even more profound than transitioning from mainframes to the Internet, digital, mobile, and cloud world. With the advent of AI and explosive data growth from so many more devices and applications, IT infrastructure leaders must rewrite their stories to keep pace.

After moving to the vendor side several years ago to work as a Senior Solutions Architect at Komprise, I work with IT leaders daily and see just how much the role of the infrastructure or data center director has changed. Here's how I see the shift, along with some tips for IT infrastructure directors and executives on staying relevant while navigating these cataclysmic shifts in technology and work.

A Shift Toward Complexity and Constant Adaptation

The job of managing data centers and infrastructure has become more multi-faceted. It is no longer just about uptime and physical infrastructure. Directors are now expected to understand a rapidly expanding universe of technologies. There is increased separation of duties, along with new responsibilities that did not exist 10 years ago. Add in constant security threats, cloud optimization demands, and the exponential growth of unstructured data, which must remain accessible where needed yet safe and secure, and the scope of the role expands fast. And while all of this happens, IT budgets are being squeezed. The mandate remains the same: do more with less.

The Unstructured Data Growth Challenge

A resounding pressure point today is storage and the relentless growth of unstructured data. Recent estimates from IDC show that over 80 percent of enterprise data is unstructured, and that volume is expected to reach 291 zettabytes by 2027.

How do you back it all up in a timely way? How do you replicate it for disaster recovery? How do you ensure protection and accessibility? How do you efficiently prepare it for AI ingestion? It comes down to recognizing that not all data is the same; you must treat data differently to manage it efficiently. Knowing what data you have, where it lives, and what value it offers is now a core competency for any infrastructure leader.

Hybrid IT and Simplification as a Strategy

Over the past few years, I have seen storage and infrastructure strategies shift significantly. The old model of managing everything the same way is obsolete. My approach has always been to keep environments as simple and basic as possible to reduce unnecessary complexity. In today’s typical hybrid IT landscape, that means using tools that are vendor-agnostic, that work across on-prem, outsourced, and cloud environments, and that give you a single dashboard to make informed decisions.

AI, Cost Cutting, and Evolving Job Roles

There is a lot of noise about AI taking over roles in IT. I do not believe that infrastructure managers, storage engineers, or data center professionals should fear for their jobs. However, relying on the status quo is not a strategy. The one thing that I have seen as a necessity for IT personnel is the ability to adjust and evolve as changes have appeared in the IT arena.

One thing is certain: AI is becoming ingrained across the business, and IT must be able to support it across every function. Nearly 90 percent of enterprises report regular AI use in at least one business function, compared with 78 percent in 2024, according to 2025 research from McKinsey. Learning how to work with AI, understanding its use cases and business applications, and knowing how to prepare the right data for it are key new skills. Equally important is staying current with cloud technologies and security best practices.

Balancing Cost, Security, and AI Readiness

IT leaders are being asked to walk a tightrope. On one side is the need to control cost and ensure security. On the other side is the drive to make data accessible and ready for AI. Yet these demands are interlinked. Cost control and security are critical to ensure that AI ambitions don’t fail or stall. Without security, AI becomes a liability rather than an advantage. The question facing today’s IT directors is along the lines of: “How do we make data more accessible without increasing risk or cost?” Success will come from integrating these requirements, not prioritizing one at the expense of the other.

Why It Is Still an Exciting Time to Work in IT Infrastructure

There is tremendous growth in the amount of data being generated, and data has moved from a support function to a true driver of decisions, products, and strategy. Data is now central to every organization, from predicting outcomes and automating decisions to personalizing experiences in real time. Add the fact that AI and ML have accentuated the value of data, and there is a lot of opportunity in this area for people who want to grow their careers and remain in IT infrastructure.

The ability to efficiently and strategically manage data and build the right environment for cost control along with flexibility and innovation is a huge need for the enterprise. In our recent industry survey (link) we found that AI data management is a top desired skillset, and organizations are prioritizing hiring individuals who can confidently lead the AI infrastructure discipline.

What’s Ahead for 2026 and Beyond

Looking ahead, I expect infrastructure directors to move beyond managing infrastructure to leading transformation. This means aligning technology with business strategy in areas such as AI integration, cybersecurity, cost control, and workforce development. AI is moving beyond the hype and becoming increasingly relevant in production workflows. Security will remain a priority. Lastly, bridging the talent gap and reskilling existing workforces should be a focus.

Five Tips for Adapting as a Modern Infrastructure Leader

  1. Treat data differently
    Stop managing all data the same way. Understand what is valuable, what is redundant, what is creating undue risks, and what needs to be accessible. Prioritize accordingly.
  2. Focus on vendor-agnostic tools
    Choose solutions that work across vendors, technologies and architectures and reduce lock-in. This simplifies operations, reduces cost and delivers better agility.
  3. Invest in learning AI concepts
    You do not need to be a data scientist. But you should understand how AI uses data, and how to prepare infrastructure to support it with proper governance.
  4. Stay current with security developments
    Security threats evolve constantly. Keep up with best practices and build security into every aspect of data and infrastructure management. Partner with the CSO.
  5. Use simplicity as a guiding principle
    Complexity creates risk and inefficiency. Whenever possible, simplify tools, processes, and architectures.


Final Thoughts

The infrastructure director’s role is not what it used to be, and that is a good thing. The scope has grown, the influence has deepened, and the strategic value of IT is clearer than ever. While the challenges are many, so are the opportunities. Those who can adapt, simplify, and lead through change will continue to be essential to their organizations.

# # #

About the Author: 

Paul Romano is a Senior Solutions Architect at Komprise. He has 25 years’ experience at Fortune 100 companies, possessing significant expertise in setting IT direction and policies, data center build outs and migrations, IT architecture, server and endpoint security, penetration testing, establishing productions support standards and guidelines, managing large IT projects and budgets, and integrating new technologies/technology practices into existing environments.

The post The New Demands on Data Center and Storage Leaders appeared first on Data Center POST.

CloudKleyer Frankfurt GmbH Announces Completion of Cross-Border IT Infrastructure Migration

2 March 2026 at 17:00

CloudKleyer Frankfurt GmbH, a German IT service provider, has completed an international project involving the relocation of a client’s server infrastructure from data centers in Stockholm and London to Frankfurt am Main. The project was delivered as a fully managed turnkey solution under the coordination of the CloudKleyer team.

The data center-to-data center (DC-to-DC) migration required the safe transfer of active production equipment while preserving uninterrupted service availability. Acting as the sole responsible contractor, the company supervised all phases of the project — from initial preparation to the controlled commissioning of systems.

Project implementation stages

  1. Structured planning and risk assessment
  2. Professional shutdown of operating equipment
  3. Secure transportation with full insurance coverage
  4. Coordination of access and on-site logistics
  5. Equipment placement (rack & stack), cabling and integration
  6. Testing, verification of operability and controlled launch

Centralized coordination ensured synchronization of technical and logistical activities and reduced risks commonly associated with infrastructure relocation. The project was completed within the scheduled timeframe and complied with established data center standards and security requirements.

Company representatives observe a growing number of infrastructure modernization initiatives across Europe, as businesses relocate workloads to modern facilities to improve reliability, scalability and regulatory compliance.

CloudKleyer intends to further expand its infrastructure transformation services and continue supporting clients in both domestic and international migration projects.

Find more information here.

# # #

About CloudKleyer Frankfurt GmbH

CloudKleyer Frankfurt GmbH is an IT infrastructure provider with more than ten years of experience in the European data center market. The company offers colocation services, IT equipment rental, Remote Hands technical support, high-speed internet connectivity and direct connections to major cloud platforms.

The post CloudKleyer Frankfurt GmbH Announces Completion of Cross-Border IT Infrastructure Migration appeared first on Data Center POST.

Company Profile: VIRTUS on Redefining Data Centre Growth in Europe

9 February 2026 at 17:30

Data Center POST had the opportunity to connect with Christina Mertens, who joined VIRTUS as VP Business Development EMEA in June of 2022. She brings over ten years' experience in developing strategies for, and expanding, existing and new hyperscale infrastructure geographies across EMEA.

Before joining VIRTUS, she spent a decade at Amazon in EMEA, where she expanded the existing AWS data centre regions in colocation and self-built facilities and launched new region geographies as country manager. In her previous role as Data Center Divestiture Principal at Amazon Web Services in EMEA, Christina worked alongside large strategic hyperscale cloud customers, advising them on their infrastructure assets and developing new models to facilitate and enhance their cloud migration journeys. At VIRTUS, she is Managing Director of Germany and Italy, responsible for overseeing all aspects of the business, including expansions, sales, data centre design, construction and operations.

The information below is summarized to provide our readers a deeper dive into who VIRTUS is, what they do and the problems they are solving in the industry.

What does VIRTUS do?  

VIRTUS is a European data centre provider and the largest in the UK. With over 10 years of experience, VIRTUS tailors solutions to specific customer requirements, whichever sector a business operates in.

What problems does VIRTUS solve in the market?

Businesses have unique workloads, project durations and changing requirements. VIRTUS’ solutions are designed to provide the digital infrastructure which supports these needs. Built to a vast scale, all of our data centres are designed modularly, allowing full flexibility for data centre customers’ requirements. Our facilities operate using 100% renewable energy and are amongst the most efficient facilities in the world.

What are VIRTUS’ core products or services?

We build AI-ready, built to suit and colocation data centres.

VIRTUS’ AI Ready Data Centres are designed to support the high performance computing (HPC) demands of artificial intelligence workloads. Our facilities provide the optimum environment for HPC deployments of any size, including the next generation of AI IT infrastructure and Machine Learning (ML) workloads, which require next generation cooling deployment and increased power per rack.

Our built to suit data centres are those designed specially for the customer. We know that organisations of all sizes need real flexibility, which is why we work with our customers to create bespoke solutions. For example, some require cutting-edge AI solutions which may require space to scale at speed, others might have a hyperscale cloud deployment that needs custom built data halls.

Our colocation service is designed to provide maximum flexibility with individual IT power and space requirements. The modular facilities are designed to scale up with customer growth. This combined with truly flexible commercials allows customers to grow in a cost efficient and unrestrictive environment.

What markets do you serve?

VIRTUS’ European data centres are strategically located in key markets; currently this is London (UK), Berlin (Germany) and Milan (Italy). As part of ST Telemedia Global Data Centres’ (STT GDC) global platform, we have a presence in ten geographies, more than 101 data centres and over 2GW of IT load across 20+ major business markets.

Our vast experience comes from working with many industry sectors – from financial institutions which require ultra-low latency, to thriving tech start ups which rely on contiguous space to grow, and providing entire buildings or campuses for the world’s largest hyperscalers.

What challenges does the global digital infrastructure industry face today?

Many current European data centres simply cannot meet the short- and long-term demands for critical digital infrastructure, often due to a shortage of infrastructure that can support high HPC workloads. It is a fundamental challenge to find land with access to renewable power to build new facilities, quickly and at scale.

For years, development revolved around a handful of key metropolitan hubs. Frankfurt, London, Amsterdam and Paris (collectively known as the FLAP locations) carried much of the continent’s cloud, enterprise and interconnection load, due to their proximity to financial services, global carriers and concentrated digital ecosystems.

Undoubtedly, whilst those hubs continue to grow, their conditions have changed. Power supply is being delayed because parts of the electricity distribution network cannot carry it, suitable land parcels are becoming scarcer and therefore more expensive to secure, and planning regulations are tightening, lengthening timelines to approval, if approvals are granted at all.

Meanwhile, demand for computing power is surging in ways that surpass forecasts made even two years ago. AI training and inference, HPC, analytics and modernised public services all require significant and sustained energy and cooling capacity.

McKinsey suggests that global demand for data centre capacity could more than triple by 2030. It is clear that Europe needs more digital infrastructure, but it needs that infrastructure in places with the headroom and regulatory clarity to support long-term expansion. This is partly why what are sometimes known as second-tier locations are becoming increasingly critical to expanding Europe's digital architecture.

This is not a marginal shift over the next five years. Analysts expect Europe's installed data centre capacity to more than double, from roughly 24 GW in 2025 to around 55 GW by 2030, with secondary markets growing fastest. And while recent CBRE analysis indicates that in 2025 around 57% of new capacity will still be delivered in the core FLAP-D markets, the remaining 43% will come from secondary locations such as Milan, Madrid and Berlin, many of which are now on track to exceed 100 MW of installed capacity in their own right. This is the context in which tier-two locations are moving from "nice to have" to essential if Europe is to keep pace with global demand.

How is VIRTUS adapting to these challenges?

Our strategy is to build new facilities at scale, located close to, but not necessarily in major European metropolitan cities, and supplied with renewable energy.

We are currently building a €3bn 300MW data centre campus development at Wustermark, west of Berlin. Wustermark offers what many central locations cannot: land large enough for a multi-building campus, access to sustainable electricity, proximity to rail and motorway networks, and alignment with Germany's policy focus on digital capacity. The site is also positioned to benefit from Germany's wider energy and grid modernisation programmes, including access to renewable energy to power the campus: it is adjacent to Germany's largest onshore windfarms, which, via a substation and direct coupling, are capable of fulfilling the energy requirements of the facility.

This move towards larger campuses is a calculated strategy that acknowledges the non-linear cost relationship inherent in these types of operations; larger megascale campuses capable of 200-500MWs can often afford providers – and therefore customers – greater efficiencies.

We are also constructing another facility in Italy. Located in Cornaredo, within the Milan West data centre cluster, the site will provide ample capacity to support hyperscalers, enterprises and service providers as digital infrastructure demands in Europe continue to grow.

What are VIRTUS’s key differentiators?

What sets VIRTUS apart from our competitors can be found in many aspects of the design, build and operations of our facilities. However, the quality of operations – the Operational Excellence – is where we truly excel. The way we have implemented design innovations makes a difference to the service we provide in terms of efficiency and resilience. It’s how we design, build, test, maintain, change and operate our facilities that differentiates us – ensuring robust and reliable availability is delivered.

What can we expect to see/hear from VIRTUS in the future?  

It's an exciting time for VIRTUS in Europe. To meet customer demand, we are still increasing our presence as the leader in the UK market, opening two new London data centres in 2026 (LONDON12 and LONDON14) and, in the near future, a large four-data-centre campus at Saunderton, whilst continuing our European expansion.

What upcoming industry events will you be attending? 

The VIRTUS team is attending the following events: Platform UK where Adam Eaton will be speaking on a keynote panel, Energy Storage Summit where Helen Kinsman will be speaking on a panel, Compute Summit where Ramzi Charif will be speaking on a panel, and finally Datacloud Energy where Helen Kinsman will be speaking on another panel.

Do you have any recent news you would like us to highlight?

Earlier in 2026 we announced VIRTUS’ new CEO, Adam Eaton. Under his leadership, we will continue to expand our portfolio of high-efficiency, sustainable data centres, building on more than a decade of rapid growth across the UK and Europe. VIRTUS remains committed to its vision to deliver world-class, energy-efficient infrastructure that supports the growth of the digital economy.

Where can our readers learn more about VIRTUS?  

You can learn more about us on our website, www.virtusdatacentres.com.

How can our readers contact VIRTUS? 

You can contact us through the form on our website, www.virtusdatacentres.com/contact-us.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed and online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to get the most current information and thought-provoking ideas most apt to add relevance to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com or submit your article here.

Want more digital infrastructure news? Stay in the know and subscribe to Data Center POST today!

The post Company Profile: VIRTUS on Redefining Data Centre Growth in Europe appeared first on Data Center POST.


Report: AI Scale Pushing Enterprise Infrastructure toward Failure

NEW YORK, Jan. 29, 2026 — Cockroach Labs, maker of the cloud-agnostic distributed SQL database CockroachDB, today announced findings from its second annual survey, “The State of AI Infrastructure 2026: Can Systems Withstand AI Scale?” The report reveals a growing concern that AI use is starting to overwhelm the traditional IT systems meant to support it. As […]

The post Report: AI Scale Pushing Enterprise Infrastructure toward Failure appeared first on Inside HPC & AI News | High-Performance Computing & Artificial Intelligence.
