
Data Governance and Clinical Innovation

31 March 2026 at 13:00

Artificial intelligence is a tool designed to power innovation, but it’s important to understand its primary fuel: data. Data is required not only to produce the outputs of AI algorithms but also to train and operate them. In sectors where innovation is driven by technologies like artificial intelligence, data has essentially become the fuel for that innovation, and ensuring the safety and quality of that data is essential to sustaining it.

Understandably, many critics have expressed concern over the use of artificial intelligence in healthcare settings, considering the private, sensitive nature of the data used in the field. Patient personal information is not only highly sensitive but also protected by law, meaning there are strict regulations and guidelines dictating how entities in healthcare can use artificial intelligence with regard to patient data.

Why strong data governance is essential for AI in healthcare

However, that doesn’t mean artificial intelligence shouldn’t be used in healthcare whatsoever. Instead, it means there is a need for strong data governance, as this is an essential step in enabling safe and ethical AI use in any industry, particularly ones such as healthcare where the stakes are high. In addition to ensuring compliance with any applicable regulations, strong data governance helps create greater transparency and trust that inspires patient confidence.

It’s important to remember the reason why the healthcare sector wants to deploy artificial intelligence technology in the first place: AI can accelerate innovation and lead to improved patient outcomes. For example, innovators in the healthcare industry have used AI to accelerate drug discovery, conduct more accurate diagnostics, and streamline operations in a way that significantly improves efficiency. But to achieve these outcomes, systems must have access to accurate, well-managed data.

The key to this is creating compliance frameworks that reduce and mitigate the risks of artificial intelligence while still supporting scalable healthcare solutions. Of course, the core of any compliance framework in healthcare is data security and privacy, but these guidelines can also help control other risks, such as algorithmic bias and “black box” risks, ensuring that all decisions and recommendations made by an artificial intelligence are fair and explainable.

Enabling the responsible deployment of AI in healthcare

Ultimately, data governance isn’t about gatekeeping but about collaboration and enabling the responsible and ethical deployment of artificial intelligence. The mindset with which we approach AI shouldn’t be about limiting how we can use the technology, but instead how we can facilitate its use in a way that does not compromise data integrity or patient privacy.

Right now, the key goal of healthcare practitioners who hope to implement artificial intelligence should be to build trust and reliability in these systems. The steps required to achieve this include ensuring data quality and diversity, maintaining transparent communication, and continuous monitoring and validation.

The best way to look at AI systems in healthcare is as an analog to human employees. In healthcare, not even human employees have unfettered access to patient data. There are access controls based on the level of access an individual needs, with checks and balances and supervisory control.

The same philosophy should apply to autonomous systems. Just as approvals and access controls are required of human employees, so too should AI systems require approvals from human overseers.

Indeed, there is a world in which artificial intelligence can revolutionize the healthcare industry for the better, alleviating some of the burden on healthcare workers and contributing to improved patient outcomes. However, for this to happen, the adoption of AI must be done in a way that is responsible and ethical. With this mindset, prioritizing strong data governance, AI can become a reliable partner in patient care.

# # #

About the Author

Chris Hutchins is the founder and CEO of Hutchins Data Strategy Consulting, where he helps healthcare institutions maximize the potential of their data through scalable, ethical data and artificial intelligence strategies. His areas of expertise include enterprise data governance, responsible AI adoption, and self-service analytics. Through team empowerment, Chris assists healthcare leaders in enhancing care delivery, reducing administrative work, and transforming data into meaningful outcomes.

The post Data Governance and Clinical Innovation appeared first on Data Center POST.

When Your Data Center Becomes a Liability Overnight

19 March 2026 at 14:00

How Centralized Infrastructure Intelligence Turns Emergency Replacements into Controlled Operations

Most infrastructure professionals spend their careers building for the planned: capacity expansions, technology refreshes, migration cycles that unfold over quarters or years. And then a Monday morning email changes everything.

A government agency bans equipment from a trusted vendor. A threat intelligence report reveals that a state-sponsored actor has been inside your network switches for eighteen months. A manufacturer announces that the platform running your entire campus backbone loses support in nine months. In each case, the same question emerges: how quickly can you identify every affected device across every facility, and how fast can you replace them without breaking what still works?

For a surprising number of organizations, the honest answer is: they don’t know. That gap between confidence in steady-state operations and readiness for unplanned mass replacement is where real risk lives.

The Forces That Turn Infrastructure Upside Down

Emergency hardware replacement at scale is not hypothetical. Recent years have produced real-world triggers across four broad categories, each with distinct operational implications.

Regulatory and geopolitical mandates. The federal effort to remove Chinese-manufactured telecommunications equipment from American networks—driven by the FCC’s Covered List and Section 889 of the National Defense Authorization Act—has forced carriers and federal contractors into wholesale infrastructure replacement on compliance timelines that don’t flex for budget cycles. The FCC has estimated the total program cost at nearly five billion dollars. Any organization touching federal dollars must verify its infrastructure is clean; if it isn’t, replacement is a compliance obligation, not a planning exercise.

Security crises that outpace patching. The Salt Typhoon campaign revealed that Chinese state-sponsored hackers had penetrated multiple major US telecommunications providers, maintaining persistent access for up to two years—exploiting legacy equipment, unpatched router vulnerabilities, and weak credential management. Investigators found routers with patches available for seven years that had never been applied. For affected carriers, the response demanded physical replacement of compromised infrastructure that could no longer be trusted regardless of patch status. When an adversary achieves sufficient persistence, patching becomes insufficient. Replacement is the only reliable remediation.

End-of-life announcements. Vendor lifecycle decisions create quieter but equally urgent pressure. An organization running multiple hardware platforms faces different end-of-support timelines for each, and dependencies between them mean replacing one can cascade into forced changes elsewhere. Without a consolidated view of what is running, where, and when it loses support, these effects are invisible until they cause failures.

Architectural shifts. Zero trust adoption, SASE frameworks, and cloud-delivered security are rendering entire categories of on-premises equipment architecturally obsolete—not because they’ve failed, but because the security model has moved on. The question is not whether legacy VPN appliances and perimeter firewalls will be replaced, but how quickly, and whether the organization has the visibility to execute in a controlled manner.

Why Standard Processes Break Down

Every mature IT organization has IMAC processes: Install, Move, Add, Change. These handle the predictable rhythm of infrastructure life. Emergency replacement programs share almost none of their characteristics.

They are triggered externally. Their scope is massive—hundreds or thousands of devices across multiple sites. They arrive without allocated budgets or pre-positioned inventory, carrying compliance deadlines indifferent to resource constraints.

The organizations that handle these events well recognize them for what they are: standalone programs needing their own governance, funding, and dedicated teams—and their own information infrastructure. That last requirement is where centralized infrastructure management becomes not a convenience but a prerequisite.

What Centralized Infrastructure Intelligence Must Deliver

Four questions—answered immediately.

What is affected, and where is it? When a regulatory notice references a specific manufacturer, or a security advisory identifies a particular hardware model and firmware version, the operations team needs a definitive count within hours, not weeks. Organizations maintaining a continuously updated centralized inventory—capturing hardware models, firmware versions, physical locations, logical roles, and contractual associations—can answer by running a query. Organizations relying on spreadsheets and periodic audits cannot. The difference in response time is typically measured in weeks, and in a compliance-driven scenario, weeks are what you don’t have. Equally important is dependency mapping: understanding that replacing a core switch will affect upstream routers, downstream access switches, and out-of-band management paths. Without it, a replacement that looks straightforward on paper can produce cascading outages in execution.
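As an illustration of the kind of query involved, here is a minimal sketch assuming a hypothetical `devices` inventory table; the table name, columns, and advisory criteria are all illustrative, not taken from any particular product.

```python
import sqlite3

# Illustrative schema: centralized inventory capturing hardware identity,
# firmware, location, and logical role (all names hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE devices (
    hostname TEXT, model TEXT, firmware TEXT, site TEXT, role TEXT)""")
conn.executemany(
    "INSERT INTO devices VALUES (?, ?, ?, ?, ?)",
    [("core-sw-01", "X-9500", "12.1", "NYC", "core"),
     ("edge-sw-07", "X-9500", "11.4", "DAL", "access"),
     ("rtr-03",     "R-2200", "8.0",  "NYC", "router")])

# A hypothetical advisory names model X-9500 with firmware below 12.0.
# (Lexicographic version compare is a simplification; real tools parse
# version numbers properly.)
affected = conn.execute(
    "SELECT hostname, site FROM devices "
    "WHERE model = 'X-9500' AND firmware < '12.0'").fetchall()
print(affected)  # [('edge-sw-07', 'DAL')]
```

The point is not the SQL itself but the prerequisite: a query like this is only answerable if the inventory already exists and is kept current.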

What is the replacement path? A legacy switch may need to be replaced by different models depending on port density, power constraints, and compatibility with adjacent equipment. Workflow-driven execution ensures every replacement follows the same approval steps, documentation requirements, and validation procedures—preventing errors that compound in programs spanning hundreds of sites.

Where are we right now? Leadership needs a live view of progress—which sites are lagging, where tasks are stalled, which teams are hitting milestones. This enables resource reallocation, timely escalation of procurement bottlenecks, and an auditable record for regulators. It also surfaces patterns previously invisible: a region that consistently runs behind, or an approval step adding days of unnecessary latency.

What did we learn? Emergency replacements are no longer rare—any organization operating at scale should expect one every few years. Those that conduct structured post-project reviews build a compounding advantage: better scoping templates, more accurate resource models, and pre-validated replacement mappings that make the next response faster.

Building Readiness Before the Next Crisis

Emergency replacements cannot be made painless—they are disruptive, expensive, and stressful regardless of preparation. But the difference between an organization that navigates one in three months and one that takes twelve is almost entirely a function of work done before the trigger.

That preparation has three dimensions: information readiness (a continuously updated inventory with hardware identity, location, firmware status, and dependency relationships), process readiness (defined workflow-driven procedures that activate quickly rather than being reinvented under pressure), and organizational readiness (governance, budget authority, and executive sponsorship that allows an emergency program to stand up as a dedicated initiative).

The organizations best positioned for the next regulatory mandate, zero-day disclosure, or end-of-life cascade are investing in that readiness today—not because they know what the trigger will be, but because they’ve built a discipline prepared for all of them.

# # #

About the Author

Oliver Lindner has over 30 years of experience in IT and the management of IT infrastructures, with a focus on data centers. He has worked for many years at FNT Software, a leading provider of integrated software solutions for IT management. In his current position as Director of Product Management, he is responsible for the strategic direction and continuous improvement of the company’s software products for data centers, with the aim of supporting customers in the efficient and transparent design of their IT infrastructure.

Oliver Lindner attaches great importance to customer focus, innovation and quality. His expertise also includes the development and provision of Software as a Service (SaaS) solutions that offer customers maximum flexibility and efficiency. To this end, he works closely with his own team, partners and customers to create sustainable and innovative software solutions.


The New Demands on Data Center and Storage Leaders

16 March 2026 at 18:00

Looking back on a career in IT, I wanted to reflect on the 20-plus years I spent working in and running data centers for Fortune 500 companies in the New York and New Jersey area. This was an exciting time leading both large and small teams through some of the most complex transformations in IT infrastructure. That included designing a trading floor infrastructure for a major bank that was implemented globally, overseeing the merger of two banks with very different IT backbones, driving a mainframe-to-open-systems modernization effort, managing a data center consolidation, and establishing global IT standards.

Today, the challenges to the job are even more profound than transitioning from mainframes to the Internet, digital, mobile, and cloud world. With the advent of AI and explosive data growth from so many more devices and applications, IT infrastructure leaders must rewrite their stories to keep pace.

After moving to the vendor side several years ago to work as a Senior Solutions Architect at Komprise, I work with IT leaders daily and see just how much the role of the infrastructure or data center director has changed. Here’s how I see the shift, with some tips for IT infrastructure directors and executives to stay relevant in their organizations while navigating these cataclysmic shifts in technology and work.

A Shift Toward Complexity and Constant Adaptation

The job of managing data centers and infrastructure has become more multi-faceted. It is no longer just about uptime and physical infrastructure. Directors are now expected to understand a rapidly expanding universe of technologies, with increased separation of duties and new responsibilities that did not exist 10 years ago. Add in constant security threats, cloud optimization demands, and the exponential growth of unstructured data, which must be accessible where needed yet kept safe and secure, and the scope of the role expands fast. And while all of this happens, IT budgets are being squeezed. The mandate remains the same: do more with less.

The Unstructured Data Growth Challenge

A resounding pressure point today is storage and the relentless growth of unstructured data. Recent estimates from IDC show that over 80 percent of enterprise data is unstructured, and that volume is expected to reach 291 zettabytes by 2027.

How do you back it all up in a timely way? How do you replicate it for disaster recovery? How do you ensure protection and accessibility? How do you efficiently prepare it for AI ingestion? It has really come down to understanding that all data is not the same, and you must treat data differently so that you can be efficient in your management of the data. Knowing what data you have, where it lives, and what value it offers is now a core competency for any infrastructure leader.

Hybrid IT and Simplification as a Strategy

Over the past few years, I have seen storage and infrastructure strategies shift significantly. The old model of managing everything the same way is obsolete. My approach has always been to keep environments as simple and basic as possible to reduce unnecessary complexity. In today’s typical hybrid IT landscape, that means using tools that are vendor-agnostic, that work across on-prem, outsourced, and cloud environments, and that give you a single dashboard to make informed decisions.

AI, Cost Cutting, and Evolving Job Roles

There is a lot of noise about AI taking over roles in IT. I do not believe that infrastructure managers, storage engineers, or data center professionals should fear for their jobs. However, relying on the status quo is not a strategy. The one thing that I have seen as a necessity for IT personnel is the ability to adjust and evolve as changes have appeared in the IT arena.

One thing is certain: AI is becoming ingrained across the business, and IT must be able to support it across every function. Nearly 90% of enterprises report regular AI use in at least one business function, up from 78% in 2024, according to 2025 research from McKinsey. Learning how to work with AI, understanding its use cases and business applications, and knowing how to prepare the right data for it are key new skills. Equally important is staying current with cloud technologies and security best practices.

Balancing Cost, Security, and AI Readiness

IT leaders are being asked to walk a tightrope. On one side is the need to control cost and ensure security. On the other side is the drive to make data accessible and ready for AI. Yet these demands are interlinked. Cost control and security are critical to ensure that AI ambitions don’t fail or stall. Without security, AI becomes a liability rather than an advantage. The question facing today’s IT directors is along the lines of: “How do we make data more accessible without increasing risk or cost?” Success will come from integrating these requirements, not prioritizing one at the expense of the other.

Why It Is Still an Exciting Time to Work in IT Infrastructure

There is tremendous growth in the amount of data being generated, and data has moved from a support function to a true driver of decisions, products, and strategy. Data is now central to every organization, powering everything from predicting outcomes to automating decisions and personalizing experiences in real time. Add the fact that both AI and ML have accentuated the value of data, and there’s a lot of opportunity in this area for people who want to grow their careers and remain in IT infrastructure.

The ability to efficiently and strategically manage data and build the right environment for cost control along with flexibility and innovation is a huge need for the enterprise. In our recent industry survey (link) we found that AI data management is a top desired skillset, and organizations are prioritizing hiring individuals who can confidently lead the AI infrastructure discipline.

What’s Ahead for 2026 and Beyond

Looking ahead, I expect infrastructure directors to move beyond managing infrastructure to leading transformation. This means aligning technology with business strategy in areas such as AI integration, cybersecurity, cost control, and workforce development. AI is moving beyond the hype and becoming increasingly relevant in production workflows. Security will remain a continuing priority, and bridging the talent gap by reskilling existing workforces should be a focus.

Five Tips for Adapting as a Modern Infrastructure Leader

  1. Treat data differently
    Stop managing all data the same way. Understand what is valuable, what is redundant, what is creating undue risks, and what needs to be accessible. Prioritize accordingly.
  2. Focus on vendor-agnostic tools
    Choose solutions that work across vendors, technologies and architectures and reduce lock-in. This simplifies operations, reduces cost and delivers better agility.
  3. Invest in learning AI concepts
    You do not need to be a data scientist. But you should understand how AI uses data, and how to prepare infrastructure to support it with proper governance.
  4. Stay current with security developments
    Security threats evolve constantly. Keep up with best practices and build security into every aspect of data and infrastructure management. Partner with the CSO.
  5. Use simplicity as a guiding principle
    Complexity creates risk and inefficiency. Whenever possible, simplify tools, processes, and architectures.


Final Thoughts

The infrastructure director’s role is not what it used to be, and that is a good thing. The scope has grown, the influence has deepened, and the strategic value of IT is clearer than ever. While the challenges are many, so are the opportunities. Those who can adapt, simplify, and lead through change will continue to be essential to their organizations.

# # #

About the Author: 

Paul Romano is a Senior Solutions Architect at Komprise. He has 25 years’ experience at Fortune 100 companies, possessing significant expertise in setting IT direction and policies, data center build outs and migrations, IT architecture, server and endpoint security, penetration testing, establishing productions support standards and guidelines, managing large IT projects and budgets, and integrating new technologies/technology practices into existing environments.


Desperate to Fund AI? Leasing May Be the Smartest Move IT Leaders Make in 2026

9 March 2026 at 14:00

AI spending is accelerating at a pace most enterprise budgets simply can’t match. While IT leaders are under pressure to deliver transformative AI capabilities, their capital budgets aren’t growing at the same rate as their AI ambitions. This mismatch is forcing difficult trade-offs: delaying projects, stretching aging infrastructure beyond its intended lifecycle, and diverting funding from other critical initiatives.

But there is another option. Increasingly, IT leaders are turning to technology leasing as a savvy strategy to help expedite AI adoption without sacrificing operational agility or financial liquidity.

AI: Thinking Through the Dollars and Sense

From my vantage point, working closely with IT leaders across industries, I hear the lament. AI infrastructure is expensive and highly concentrated, particularly GPU-based compute power. A single GPU cluster designed to support large-scale AI workloads can cost hundreds of thousands to millions of dollars. For enterprise-wide deployments, total data center investments can easily reach $150 million, and as much as $500 million.

For mid-tier enterprises, the challenges are even greater, as many lack the balance-sheet strength to secure traditional credit for such large capital expenditures. Some resort to private equity or high-interest lenders. But even those who can afford to purchase the infrastructure outright are frustrated by the pace of AI innovation and the risk of technology becoming quickly outdated or obsolete.

For determined IT leaders, the question is not whether to invest in AI infrastructure, but how to fund it without compromising the broader IT roadmap. This is where the financing strategy becomes just as important as the technology strategy.

IT leasing eases these pressures in several critical ways:

  • Minimizing upfront costs. Traditional purchasing requires a massive outlay of capital, sometimes forcing companies to scale back or winnow down the scope of projects despite urgent demand. Leasing converts that one-time expense into predictable monthly payments. Instead of committing $50 million upfront, an organization can structure payments over time, freeing capital for additional initiatives and allowing multiple AI projects to move forward simultaneously.
  • Enhancing flexibility and reducing financial risk. Purchased technology sits on the balance sheet and depreciates over a fixed period. If business needs shift or the organization upgrades early, it can trigger book losses. Leasing – when structured properly – can classify equipment as an operating expense, keeping it off the balance sheet and enabling companies to pivot more easily without the burden of carrying these assets.
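To make the "predictable monthly payments" point concrete, here is a rough sketch of how a large upfront cost spreads across a lease term; the figures, residual percentage, and rate are purely illustrative, not quoted lease terms, and real lease pricing is more involved.

```python
def monthly_lease_payment(cost, residual_pct, annual_rate, years):
    """Approximate monthly payment amortizing (cost - residual value)
    at a fixed rate, using the standard annuity formula. This is a
    simplification of real lease pricing."""
    residual = cost * residual_pct        # value assumed left at term end
    principal = cost - residual           # the depreciating portion paid off
    r = annual_rate / 12                  # monthly rate
    n = years * 12                        # number of payments
    return principal * r / (1 - (1 + r) ** -n)

# Illustrative: $50M of equipment, 12% assumed residual, 7% rate, 4-year term.
pmt = monthly_lease_payment(50_000_000, 0.12, 0.07, 4)
print(f"${pmt:,.0f}/month")
```

Under these assumed numbers the payment works out to roughly $1 million a month, which is the liquidity-preserving trade the article describes: capital stays available while the equipment is paid for over its useful life.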

Lease the Entire AI Stack, Not Just the Hardware

IT leaders recognize today’s AI deployments extend far beyond servers. Enterprises are leasing high-performance GPU servers optimized for AI model training and inference, along with high-speed networking equipment, enterprise storage systems, integrated “rack and roll” data center solutions, firewalls, and AI-specific software.

Maintenance contracts, security tools, and embedded applications can all be incorporated into a single lease structure.

This bundling delivers administrative and compliance benefits. Hardware typically retains a residual value, often 10–15% of purchase cost, which is amortized across the lease term. Software licenses and other “soft costs” are included in payments and expire at term end, eliminating resale complications. Clients are responsible only for the hardware at lease completion, simplifying compliance and ensuring security updates, patches, and licenses remain current throughout the lifecycle.

Combat Obsolescence Before It Becomes a Liability

One of the most common concerns I hear from executives is technology obsolescence. And given the pace of AI, where innovation cycles are measured in months, not years, that concern is justified.

Leasing naturally enforces rigor and discipline for countering obsolescence. A three- or four-year term creates a defined decision point: extend, buy out, or upgrade the technology. This prevents the “set it and forget it” ownership mindset that often leads to aging, unsupported systems and expensive, reactive refresh cycles. In AI environments, delaying upgrades can multiply total costs through inefficiencies and lost competitive advantage.

Leasing is a Budget Multiplier

Looking ahead to 2026 and beyond, IT leaders must think differently about capital allocation. No one can predict what the AI landscape will look like in three years. Owning large volumes of rapidly depreciating infrastructure can limit strategic agility.

Leaders must also factor in the full lifecycle cost of AI infrastructure, which includes equipment refreshes, secure data wiping, asset disposition, and regulatory compliance. These factors carry operational and financial burdens when assets are owned outright.

The most important priority today is building a strategy that enables AI adoption with minimal upfront cost and maximum flexibility. Leasing can act as a budget multiplier. Instead of exhausting capital on one large acquisition, organizations can deploy that same funding across predictable monthly payments, preserving liquidity while expanding total project capacity. In doing so, IT leaders maintain momentum across their complete technology roadmap, ensuring AI transformation doesn’t come at the expense of operational resilience.

# # #

About the Author

Frank Sommers brings 30 years of experience in the IT leasing industry, working closely with global enterprise organizations to help them modernize infrastructure while preserving capital and accelerating technology adoption. Known for consistently exceeding sales targets, Frank has also developed and led numerous successful vendor financing programs in partnership with major resellers, creating flexible acquisition models that support complex IT environments. His deep expertise in IT lifecycle management, financing strategies, and enterprise procurement has made him a trusted advisor across the industry. A former collegiate soccer player at Cal Poly San Luis Obispo, Frank brings the same competitiveness and teamwork to every client relationship.



Three steps to better picking

2 February 2026 at 14:00



The labor-intensive job of picking is one of the most critical roles in the warehouse in an era when companies live and die by their ability to get orders out the door quickly and accurately. This is true in B-to-B (business-to-business) environments as well as the increasingly demanding consumer market, where lightning-fast shipping is the norm.

But picking is a tough position to fill these days, according to recent industry studies on the state of warehouse labor. Aside from a general difficulty in finding warehouse help, hiring pick and pack workers was cited as the most difficult recruiting challenge in the industry by business leaders surveyed for the “2025 State of Warehouse Labor Report” from staffing company Instawork.

“Warehouse operators continue to highlight that finding and retaining hourly workers is a top concern,” the authors wrote in the October 2025 report. “Among the most difficult roles to staff are pick/pack workers, forklift operators, and shift leads. Most respondents reported turnover rates less than 10%; however a notable portion of survey respondents cited turnover rates between 10% and 25%, further underscoring the instability many facilities face in maintaining a reliable labor pool.”


More than 20% of the survey’s respondents listed picking as a challenging role to fill, compared to 16% who cited shift leads and 14% who cited forklift operators as hard to find (see Exhibit 1). Amid that pressure, many warehouses are seeking ways to make picking easier and workers more productive—all while maintaining accurate fulfillment metrics and a stable workforce. Here are three steps companies can take to meet those challenges head on.

1. EASE THE PHYSICAL DEMANDS OF THE JOB

David Barker, of supply chain technology provider Honeywell, says picking represents a specific set of challenges that make the job both critical to operations and hard to fill. First, picking is a high-cost area: More than half (55%) of a warehouse’s total operating costs can be attributed to picking, Barker says, citing Georgia Institute of Technology data. This puts a microscope on the picking function, which must also be efficient and precise. At the same time, the physical demands of the job can add up: Pickers often walk miles in a single shift, pushing heavy carts in an environment that can be “hot when it’s hot, and cold when it’s cold,” explains Barker, who is president of Honeywell PSS, the company’s productivity solutions and services business. Repetitive stress from physically reaching, lifting, and twisting to select items can take a toll as well.

Combined, these factors make the picking function ripe for intervention.

“[There is a] spectrum of skills required in the warehouse—and [picking] is a skilled operation,” Barker explains, referring to the precision required of the job and the difficulty of getting replacement staff up to speed in high-turnover situations. “It’s not easy to replace someone. Ramp-up time is required.

“This is an area where there is plenty of opportunity for improvement.”

Barker’s colleague Matt Sterner agrees and points to accelerating fulfillment and delivery demands as an added burden.

“With the continued growth in e-commerce, that continues to drive the need for faster throughput in the warehouse—and picking gets the most attention,” says Sterner, who is global customer marketing leader for transportation, logistics, and warehousing at Honeywell. “You have to get product picked and to the customer as [quickly as possible].”

Warehouse leaders can alleviate some of the physical stress on workers and boost productivity by optimizing facility layout and automating the picking process. A disorganized warehouse can cause excessive travel time, for one thing, so the first step is to analyze your layout to ensure a smooth flow throughout the building.

As for what that might entail, material handling equipment maker BHS Inc. recommends the following steps:
  • Ensure a logical flow from receiving to storage, then to picking, packing, and shipping areas to minimize backtracking.
  • Implement an ABC analysis, in which high-velocity “A” items are stored in the most accessible locations, closest to packing and shipping stations, to cut down on picker travel.
  • Regularly review and adjust your slotting strategy to adapt to changing demand patterns.
  • Think vertically. Maximizing vertical space with appropriate racking not only increases storage density but also makes more SKUs [stock-keeping units] accessible within a condensed footprint, further reducing travel.
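The ABC analysis BHS recommends can be sketched in a few lines: rank SKUs by pick velocity and assign the fastest movers to the most accessible slots. The pick counts and the 20%/30% class split below are illustrative assumptions, not BHS figures.

```python
# ABC slotting sketch: rank SKUs by pick velocity and assign the fastest
# movers ("A" items) to the most accessible locations. The pick counts and
# the 20%/30% class split are illustrative assumptions, not BHS figures.

def abc_classify(pick_counts, a_share=0.2, b_share=0.3):
    """Return {sku: 'A' | 'B' | 'C'} by descending pick velocity."""
    ranked = sorted(pick_counts, key=pick_counts.get, reverse=True)
    a_cut = max(1, round(len(ranked) * a_share))
    b_cut = a_cut + max(1, round(len(ranked) * b_share))
    return {
        sku: ("A" if i < a_cut else "B" if i < b_cut else "C")
        for i, sku in enumerate(ranked)
    }

# Hypothetical monthly pick counts per SKU
pick_counts = {"SKU1": 950, "SKU2": 40, "SKU3": 610, "SKU4": 15, "SKU5": 120}
classes = abc_classify(pick_counts)  # fastest mover lands in class "A"
```

Rerunning this classification as demand shifts is what "regularly review and adjust your slotting strategy" amounts to in practice.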

Once you have an ideal layout, the next step is to start automating manual processes.

2. TRUST YOUR WORKERS WITH TECHNOLOGY

Technology comes into play in small and large ways to automate the picking function—from relatively easy-to-install voice-based picking solutions to collaborative picking robots and large-scale automated storage and retrieval systems (AS/RS). Barker and Sterner say voice technology is often the best place for companies to begin, noting that most companies see a 30% increase in productivity after implementing voice-directed picking systems—those in which employees receive picking instructions via a headset rather than having to read a printed list or handheld screen.

“People that use voice tend to really enjoy using voice,” Sterner explains. “It’s hands-free, eyes up—so you can focus on what you are doing. It keeps things moving and efficient. That’s always a strong play in the warehouse.”

These days, artificial intelligence (AI) plays a growing role as well. Sterner points out that AI tools can be integrated with warehouse technologies and used for inventory slotting and creating pick paths, based on the tools’ analysis and identification of “hot spots” in the warehouse. Agentic AI tools embedded into voice-directed picking technology can also help by answering pickers’ routine questions, like those involving procedure or protocol.

“[A picker] can ask the agentic AI a question: ‘How do I proceed?’ or ‘Hey, I see a stockout here; what should I do?’” Sterner explains. “[The worker] can ask that question, get an answer, and move on to the next pick [quickly]. It minimizes the disruption of going to ask someone.”
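Pick-path creation of the kind Sterner describes can be illustrated with a simple greedy nearest-neighbor ordering over pick locations. The (aisle, bay) coordinates and Manhattan-distance metric here are assumptions for the sketch, not Honeywell's actual routing logic.

```python
# Greedy nearest-neighbor pick-path sketch over (aisle, bay) coordinates.
# The stop list and Manhattan-distance metric are illustrative assumptions,
# not Honeywell's actual routing logic.

def pick_path(start, stops):
    """Order stops by repeatedly visiting the closest remaining one."""
    def dist(a, b):  # Manhattan distance suits grid-like aisle layouts
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    path, here, remaining = [], start, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: dist(here, s))
        remaining.remove(nxt)
        path.append(nxt)
        here = nxt
    return path

# Picks scattered across four locations; travel starts at the depot (0, 0)
route = pick_path((0, 0), [(4, 2), (1, 1), (3, 5), (0, 3)])
```

A real AI-driven system would weigh congestion "hot spots" and aisle direction as well, but even this greedy ordering cuts travel versus picking in list order.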

Companies that implement such technologies are likely to find themselves on the winning side of today’s recruiting and retention challenges, based on data from a separate industry survey that was also released late last year. Warehouse robotics company Exotec surveyed 400 U.S. warehouse workers and found that the vast majority embrace the idea of warehouse automation. Almost all of the respondents (98%) reported that automation makes them more productive, for instance, and nearly 70% said that automation-assisted tasks are more enjoyable than traditional, manual tasks. On top of that, nearly 60% reported a decrease in physical strain on their bodies thanks to automation.

That kind of job satisfaction makes workers stick around and helps attract more of them: The survey found that associates who work with automated equipment are more than three times as likely to stay at their job longer rather than leave early (36% versus 11%), for example, and that workers are nearly three times more likely to apply for jobs at warehouses with automation compared to those without (37% versus 13%).

Taking it a step further, Barker and Sterner note that workers can feel undervalued if they’re not being challenged or trusted with technology. Barker cites a Honeywell retail customer that conducted an internal survey of its warehouse workers, some of whom were given company-issued mobile devices as part of their jobs and some of whom were not. The latter group reported feeling less valued than their tech-enabled counterparts—which Barker says surprised both Honeywell and the retailer.

“It’s actually much more powerful than we thought it was. The fact that you award [an expensive] device to someone is very, very meaningful,” Barker says. “That sense of trust makes a difference. It’s a statement that we are investing in you.”

Automation can also lead to higher earnings. The Exotec survey found that nearly half of the workers surveyed (49%) had earned pay increases thanks to warehouse automation and 40% agreed that working with automated equipment increases the likelihood of getting a raise or promotion. Those benefits can help create a more stable workforce.

3. COMMUNICATE A CAREER PATH

Competitive pay rates are an effective recruiting and retention tool, to be sure, but they are not the only one available—which is good news in an increasingly cost-conscious warehouse environment. The Instawork survey notes that warehouses must carefully balance the pressure to increase worker pay with the financial realities of rising costs for goods, transportation, and facility operations. The best way to do that is by exploring creative and cost-effective strategies to attract and retain talent, according to the survey. Those strategies could include offering flexible schedules, shift bonuses, or long-term career development opportunities.

“Balancing the needs of the workforce with the financial sustainability of the business will be essential for long-term success,” the authors wrote.

Barker and Sterner agree and emphasize the importance of demonstrating a clear growth path—for pickers as well as the broader warehouse workforce.

“Investing in training and development programs is essential,” Sterner says. “If [workers] don’t see a future path in the organization, that makes it difficult to bring them in and keep them.

“Help them grow in the position and show them a future path in the organization. Whenever workers feel supported and feel like there’s opportunity, they tend to stay. In the warehouse, that’s very important. Not everyone wants to come in and stay at the role that [they start with].”

FROM POWER DEFICIT TO POWER SUFFICIENT – EQ

In Short : Over the past decade, India has shifted from chronic power shortages to being largely power-sufficient by massively expanding electricity generation capacity and grid infrastructure. Installed capacity has nearly doubled since 2014, narrowing the gap between demand and supply to almost zero and allowing India to meet peak demand with no shortfall. This transition supports economic growth, universal electrification, and energy security.

In Detail : There is adequate availability of power in the country. The present installed generation capacity of the country is 513.730 GW. The Government of India has addressed the critical issue of power deficiency by adding 289.607 GW of fresh generation capacity since April 2014, transforming the country from power deficit to power sufficient.

The State/UT-wise details of the Power Supply Position, including Maharashtra, for the last three years and the current FY 2025-26 (up to December 2025) are given below. These details indicate that Energy Supplied has been commensurate with the Energy Requirement, with only a marginal gap that is generally on account of constraints in the State transmission/distribution network. Hence there has been no impact of shortage on economic and industrial growth.

Further, Electricity being a concurrent subject, the supply and distribution of electricity to the various categories of consumers/areas/districts in a State/UT is within the purview of the respective State Government/Power Utility. The Central Government supplements the efforts of the State Governments by establishing power plants in Central Sector through Central Public Sector Undertakings (CPSUs) and allocating power from them to the various States / UTs.

The Government has taken the following steps to meet the increasing demand for electricity in the country:

1. Generation Planning:

  • As per the National Electricity Plan (NEP), installed generation capacity in 2031-32 is likely to be 874 GW. To ensure that generation capacity remains ahead of projected peak demand, all the States, in consultation with the CEA, have prepared their “Resource Adequacy Plans (RAPs)”, which are dynamic 10-year rolling plans and include both power generation and power procurement planning.
  • All the States were advised to initiate the process of creating/contracting generation capacities from all generation sources, as per their Resource Adequacy Plans.
  • In order to augment power generation capacity, the Government of India has initiated the following capacity addition programme:

(A) The projected thermal (coal and lignite) capacity requirement by the year 2034-35 is estimated at approximately 3,07,000 MW, as against the installed capacity of 2,11,855 MW as on 31.03.2023. To meet this requirement, the Ministry of Power has envisaged setting up a minimum of 97,000 MW of additional coal- and lignite-based thermal capacity, and several initiatives have already been undertaken. Thermal capacities of around 17,360 MW have been commissioned from April 2023 to 20.01.2026. In addition, 39,545 MW of thermal capacity (including 4,845 MW of stressed thermal power projects) is currently under construction. Contracts for 22,920 MW have been awarded and are due for construction. Further, 24,020 MW of coal- and lignite-based candidate capacity has been identified, which is at various stages of planning in the country.

(B) 12,973.5 MW of Hydro Electric Projects are under construction. Further, 4,274 MW of Hydro Electric Projects are at various stages of planning and targeted to be completed by 2031-32.

(C) 6,600 MW of Nuclear Capacity is under construction and targeted to be completed by 2029-30. A further 7,000 MW of Nuclear Capacity is at various stages of planning and approval.

(D) 1,57,800 MW of Renewable Capacity, including 67,280 MW of Solar, 6,500 MW of Wind, and 60,040 MW of Hybrid power, is under construction, while 48,720 MW of Renewable Capacity, including 35,440 MW of Solar and 11,480 MW of Hybrid Power, is at various stages of planning and targeted to be completed by 2029-30.

(E) In energy storage systems, 11,620 MW/69,720 MWh of Pumped Storage Projects (PSPs) are under construction. Further, a total of 6,580 MW/39,480 MWh of PSP capacity has been concurred and is yet to be taken up for construction. Currently, 9,653.94 MW/26,729.32 MWh of Battery Energy Storage System (BESS) capacity is under construction and 19,797.65 MW/61,013.40 MWh of BESS capacity is at the tendering stage.

2. Transmission Planning: Inter- and Intra-State Transmission Systems have been planned, and their implementation is taken up in a time frame matching generation capacity addition. As per the National Electricity Plan, about 1,91,474 ckm of transmission lines and 1,274 GVA of transformation capacity are planned to be added (at 220 kV and above voltage levels) during the ten-year period from 2022-23 to 2031-32.

3. Promotion of Renewable Energy Generation:

  • Inter-State Transmission System (ISTS) charges have been waived for inter-state sale of solar and wind power for projects to be commissioned by 30th June 2025, for Green Hydrogen projects till December 2030, and for offshore wind projects till December 2032.
  • Standard Bidding Guidelines for the tariff-based competitive bidding process for procurement of power from Grid-Connected Solar, Wind, Wind-Solar Hybrid, and Firm & Dispatchable RE (FDRE) projects have been issued.
  • Renewable Energy Implementing Agencies (REIAs) are regularly inviting bids for procurement of RE power.
  • Foreign Direct Investment (FDI) has been permitted up to 100 percent under the automatic route.
  • To augment the transmission infrastructure needed for the steep RE trajectory, a transmission plan has been prepared till 2032.
  • The laying of new intra-state transmission lines and the creation of new sub-station capacity have been funded under the Green Energy Corridor Scheme for evacuation of renewable power.
  • A scheme for setting up Solar Parks and Ultra Mega Solar Power projects is being implemented to provide land and transmission to RE developers for installation of RE projects at large scale.
  • Schemes such as Pradhan Mantri Kisan Urja Suraksha evam Utthaan Mahabhiyan (PM-KUSUM), PM Surya Ghar Muft Bijli Yojana, the National Programme on High Efficiency Solar PV Modules, Dharti Aabha Janjatiya Gram Utkarsh Abhiyan (DA JGUA), the National Green Hydrogen Mission, and the Viability Gap Funding (VGF) Scheme for Offshore Wind Energy Projects have been launched.
  • To encourage RE consumption, a Renewable Purchase Obligation (RPO) followed by a Renewable Consumption Obligation (RCO) trajectory has been notified till 2029-30. The RCO, which is applicable to all designated consumers under the Energy Conservation Act, 2001, will attract penalties for non-compliance.
  • “Strategy for Establishment of Offshore Wind Energy Projects” has been issued.
  • Green Term Ahead Market (GTAM) has been launched to facilitate sale of Renewable Energy Power through exchanges.
  • Production Linked Incentive (PLI) scheme has been launched to achieve the objective of localisation of supply chain for solar PV Modules.

The State-wise details of the Power Supply Position in the country in terms of energy for the years 2022-23 and 2023-24 are given below.

State/System/Region | April, 2022 – March, 2023: Energy Requirement (MU), Energy Supplied (MU), Energy not Supplied (MU), Energy not Supplied (%) | April, 2023 – March, 2024: Energy Requirement (MU), Energy Supplied (MU), Energy not Supplied (MU), Energy not Supplied (%)
Chandigarh 1,788 1,788 0 0 1,789 1,789 0 0
Delhi 35,143 35,133 10 0 35,501 35,496 5 0
Haryana 61,451 60,945 506 0.8 63,983 63,636 348 0.5
Himachal Pradesh 12,649 12,542 107 0.8 12,805 12,767 38 0.3
Jammu & Kashmir 19,639 19,322 317 1.6 20,040 19,763 277 1.4
Punjab 69,522 69,220 302 0.4 69,533 69,528 5 0
Rajasthan 1,01,801 1,00,057 1,745 1.7 1,07,422 1,06,806 616 0.6
Uttar Pradesh 1,44,251 1,43,050 1,201 0.8 1,48,791 1,48,287 504 0.3
Uttarakhand 15,647 15,386 261 1.7 15,644 15,532 112 0.7
Northern Region 4,63,088 4,58,640 4,449 1 4,76,852 4,74,946 1,906 0.4
Chhattisgarh 37,446 37,374 72 0.2 39,930 39,872 58 0.1
Gujarat 1,39,043 1,38,999 44 0 1,45,768 1,45,740 28 0
Madhya Pradesh 92,683 92,325 358 0.4 99,301 99,150 151 0.2
Maharashtra 1,87,309 1,87,197 111 0.1 2,07,108 2,06,931 176 0.1
Dadra & Nagar Haveli and Daman & Diu 10,018 10,018 0 0 10,164 10,164 0 0
Goa 4,669 4,669 0 0 5,111 5,111 0 0
Western Region 4,77,393 4,76,808 586 0.1 5,17,714 5,17,301 413 0.1
Andhra Pradesh 72,302 71,893 410 0.6 80,209 80,151 57 0.1
Telangana 77,832 77,799 34 0 84,623 84,613 9 0
Karnataka 75,688 75,663 26 0 94,088 93,934 154 0.2
Kerala 27,747 27,726 21 0.1 30,943 30,938 5 0
Tamil Nadu 1,14,798 1,14,722 77 0.1 1,26,163 1,26,151 12 0
Puducherry 3,051 3,050 1 0 3,456 3,455 1 0
Lakshadweep 64 64 0 0 64 64 0 0
Southern Region 3,71,467 3,70,900 567 0.2 4,19,531 4,19,293 238 0.1
Bihar 39,545 38,762 783 2 41,514 40,918 596 1.4
DVC 26,339 26,330 9 0 26,560 26,552 8 0
Jharkhand 13,278 12,288 990 7.5 14,408 13,858 550 3.8
Odisha 42,631 42,584 47 0.1 41,358 41,333 25 0.1
West Bengal 60,348 60,274 74 0.1 67,576 67,490 86 0.1
Sikkim 587 587 0 0 544 543 0 0
Andaman- Nicobar 348 348 0 0.12914 386 374 12 3.18562
Eastern Region 1,82,791 1,80,888 1,903 1 1,92,013 1,90,747 1,266 0.7
Arunachal Pradesh 915 892 24 2.6 1,014 1,014 0 0
Assam 11,465 11,465 0 0 12,445 12,341 104 0.8
Manipur 1,014 1,014 0 0 1,023 1,008 15 1.5
Meghalaya 2,237 2,237 0 0 2,236 2,066 170 7.6
Mizoram 645 645 0 0 684 684 0 0
Nagaland 926 873 54 5.8 921 921 0 0
Tripura 1,547 1,547 0 0 1,691 1,691 0 0
North-Eastern Region 18,758 18,680 78 0.4 20,022 19,733 289 1.4
All India 15,13,497 15,05,914 7,583 0.5 16,26,132 16,22,020 4,112 0.3
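The "Energy not Supplied (%)" figures in the table above are simply the shortfall relative to requirement; a minimal check against the Haryana 2022-23 row:

```python
# "Energy not Supplied (%)" is the shortfall relative to requirement:
#   ENS% = (Energy Requirement - Energy Supplied) / Energy Requirement * 100

def ens_percent(requirement_mu, supplied_mu):
    return round((requirement_mu - supplied_mu) / requirement_mu * 100, 1)

# Haryana, 2022-23: requirement 61,451 MU, supplied 60,945 MU -> 0.8%
shortfall = ens_percent(61_451, 60_945)
```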

The State-wise details of the actual Power Supply Position in the country in terms of energy for the year 2024-25 and the current year 2025-26 (up to December 2025) are given below.

State/System/Region | April, 2024 – March, 2025: Energy Requirement (MU), Energy Supplied (MU), Energy not Supplied (MU), Energy not Supplied (%) | April, 2025 – December, 2025: Energy Requirement (MU), Energy Supplied (MU), Energy not Supplied (MU), Energy not Supplied (%)
Chandigarh 1,952 1,952 0 0 1,509 1,509 1 0.0
Delhi 38,255 38,243 12 0 31,011 31,004 7 0.0
Haryana 70,149 70,120 30 0 55,932 55,867 65 0.1
Himachal Pradesh 13,566 13,526 40 0.3 10,295 10,259 36 0.3
Jammu & Kashmir 20,374 20,283 90 0.4 14,874 14,862 12 0.1
Punjab 77,423 77,423 0 0 60,852 60,811 41 0.1
Rajasthan 1,13,833 1,13,529 304 0.3 82,782 82,782 0 0.0
Uttar Pradesh 1,65,090 1,64,786 304 0.2 1,29,271 1,29,245 26 0.0
Uttarakhand 16,770 16,727 43 0.3 12,634 12,585 49 0.4
Northern Region 5,18,869 5,17,917 952 0.2 4,00,371 4,00,135 236 0.1
Chhattisgarh 43,208 43,180 28 0.1 31,484 31,475 8 0.0
Gujarat 1,51,878 1,51,875 3 0 1,18,066 1,18,066 0 0.0
Madhya Pradesh 1,04,445 1,04,312 133 0.1 75,024 75,017 7 0.0
Maharashtra 2,01,816 2,01,757 59 0 1,49,339 1,49,330 9 0.0
Dadra & Nagar Haveli and Daman & Diu 10,852 10,852 0 0 8,437 8,437 0 0.0
Goa 5,411 5,411 0 0 4,085 4,085 0 0.0
Western Region 5,28,924 5,28,701 223 0 3,96,482 3,96,458 24 0.0
Andhra Pradesh 79,028 79,025 3 0 59,580 59,574 6 0.0
Telangana 88,262 88,258 4 0 61,137 61,130 7 0.0
Karnataka 92,450 92,446 4 0 67,697 67,687 9 0.0
Kerala 31,624 31,616 8 0 22,947 22,945 2 0.0
Tamil Nadu 1,30,413 1,30,408 5 0 99,673 99,664 10 0.0
Puducherry 3,549 3,549 0 0 2,693 2,690 3 0.1
Lakshadweep 68 68 0 0 54 54 0 0.0
Southern Region 4,25,373 4,25,349 24 0 3,13,762 3,13,724 38 0.0
Bihar 44,393 44,217 176 0.4 37,299 37,283 15 0.0
DVC 25,891 25,888 3 0 18,590 18,587 3 0.0
Jharkhand 15,203 15,126 77 0.5 11,717 11,711 6 0.1
Odisha 42,882 42,858 24 0.1 34,037 34,032 5 0.0
West Bengal 71,180 71,085 95 0.1 56,921 56,888 32 0.1
Sikkim 574 574 0 0 378 378 0 0.0
Andaman- Nicobar 425 413 12 2.9 316 299 17 5.5
Eastern Region 2,00,180 1,99,806 374 0.2 1,58,986 1,58,924 62 0.0
Arunachal Pradesh 1,050 1,050 0 0 909 909 0 0.0
Assam 12,843 12,837 6 0 10,973 10,973 0 0.0
Manipur 1,079 1,068 10 0.9 863 861 3 0.3
Meghalaya 2,046 2,046 0 0 1,542 1,542 0 0.0
Mizoram 709 709 0 0 559 559 0 0.0
Nagaland 938 938 0 0 772 772 0 0.0
Tripura 1,939 1,939 0 0 1,523 1,523 0 0.0
North-Eastern Region 20,613 20,596 16 0.1 17,227 17,224 3 0.0
All India 16,93,959 16,92,369 1,590 0.1 12,86,829 12,86,465 363 0.0

This Information was given by The Minister of State in the Ministry of Power, Shri Shripad Naik, in a written reply in the Lok Sabha today.

Tesla’s First Ever Annual Revenue Drop Is Not The Concerning Part

29 January 2026 at 01:41

Tesla has now published its 4th quarter and full-year financial details and various updates on vehicle models, robots, factories, and its energy business. Steve Hanley is going to cover some of the vehicle and robot news, so I’m jumping into the finances. One of the big headline stories is that ... [continued]

The post Tesla’s First Ever Annual Revenue Drop Is Not The Concerning Part appeared first on CleanTechnica.


Robots: Still making strides in logistics

30 January 2026 at 14:00



Makers of humanoid robots are targeting logistics, specifically the warehouse, as they continue a steady march to integrate their human-looking machines into today’s increasingly automated workplaces. That’s because research shows that the labor-intensive warehouse is a promising market for the still-nascent technology, which mimics the human body and can perform a range of material handling and order fulfillment tasks.


U.K.-based research firm IDTechEx projects logistics and warehousing will be the second-largest adopter of humanoid robots over the next 10 years, following just behind the automotive industry (see Exhibit 1). Key benefits in the warehouse include bringing precision and consistency to repetitive tasks and improving speed while minimizing human error, the company said in an October market outlook report.

“Facing acute labor shortages and rising operational complexity, warehouses are turning to humanoids as a promising solution,” according to the report. “The benefits are multifaceted: Humanoid robots help lower labor costs, reduce operational disruptions, and offer unmatched flexibility, capable of adapting to varying tasks throughout the day.”

But the research also tells a deeper story: As of last year, humanoid robot deployment in warehouses remained below 5%, due to both technological and commercial roadblocks. Short operating time and long recharge cycles can create substantial downtime, for instance, while limited field testing and safety concerns have left many end-users cautious. A separate industry study, by U.K. researcher Interact Analysis, predicts humanoid robot growth will be relatively slow in the short term, reaching about 40,000 shipments globally by 2032.

“The humanoid robot market is currently experiencing substantial hype, fueled by a large addressable market and significant investment activity,” Rueben Scriven, research manager at Interact Analysis, wrote in the 2025 report. “However, despite the potential, our outlook remains cautious due to several key barriers that hinder widespread adoption, including high prices and the gap in the dexterity needed to match human productivity levels, both of which are likely to persist into the next decade. However, we maintain that there’s a significant potential in the mid- to long term.”

Challenges aside, the work to develop and deploy humanoids continues, with many companies hitting major milestones in 2025 and early 2026. Here’s a look at some of the most recent accomplishments.

DIGIT GETS BUSY

Humanoid robots resemble the human body—in general, they have a torso, head, and two arms and legs, but they can also replicate just portions of the body. Robotic arms can be considered humanoid, as can bots that feature an upper body on a wheeled base. The bipedal variety—those that can walk on two legs—are gaining momentum.

Agility Robotics announced late last year that its bipedal humanoid robot, called Digit, had moved more than 100,000 totes in a commercial environment—at a GXO Logistics facility in Flowery Branch, Georgia. Just a few weeks later, the company said it would deploy Digit robots in San Antonio, Texas, to handle fulfillment operations for e-commerce fulfillment platform Mercado Libre. The companies said they plan to explore additional uses for Digit across Mercado Libre’s warehouses in Latin America. They did not give a timeframe for the rollout.

Agility’s humanoid robots are also in use at facilities run by Amazon and German motion technology company Schaeffler.

Agility is a business unit of Humanoid Global Holdings, which includes robotic companies Cartwheel Robotics, RideScan Ltd., and Formic Technologies Inc. in its portfolio of businesses.

ALPHA BIPEDAL TAKES OFF

U.K.-based robotics and AI (artificial intelligence) developer Humanoid launched its first bipedal robot this past December, introducing HMND 01 Alpha Bipedal. The robot went from design to working prototype in just five months and was up and walking just 48 hours after final assembly—a feat that typically takes weeks or even months, according to the bot’s developers.

Alpha Bipedal stands five feet, 10 inches tall and can carry loads of 33 pounds in its arms. Still in testing, the bot is designed to tackle industrial, household, and service tasks.

“HMND 01 is designed to address real-world challenges across industrial and home environments,” Artem Sokolov, founder and CEO of Humanoid, said in a December statement announcing the launch. “With manufacturing sectors facing labor shortages of up to 27%, leaving significant gaps in production, and millions of people performing physically demanding or repetitive tasks, robots can provide meaningful support. In domestic environments, they have the potential to assist elderly people or those with physical limitations, helping with object handling, coordination, and daily activities. Every day, over 16 billion hours are spent on unpaid domestic and care work worldwide—work that, if valued economically, would exceed 40% of GDP in some countries. By taking on these responsibilities, humanoid robots can free humans to focus on higher-value and safer work, improving their productivity and quality of life.”

HMND 01 Alpha Bipedal follows the September launch of Humanoid’s wheeled Alpha platform, which has been tested commercially and helped extend the company’s reach from industrial and logistics tasks—including warehouse automation, picking, and palletizing—to domestic support applications.

AGILE ONE TAKES OFF

Robotic automation company Agile Robots launched its first humanoid robot, called Agile One, in November. The robot is designed to work in industrial settings, where company leaders say it can operate safely and efficiently alongside humans and other robotic solutions. The bot’s key tasks include material gathering and transport, pick-and-place operations, machine tending, tool use, and fine manipulation.

Agile One will be manufactured at the company’s facilities in Germany.

“At Agile Robots, we believe the next industrial revolution is Physical AI: intelligent, autonomous, and flexible robots that can perceive, understand, and act in the physical world,” Agile Robots’ CEO and founder, Dr. Zhaopeng Chen, said in a statement announcing the launch. “Agile One embodies this revolution.”

The new humanoid is part of the company’s wider portfolio of AI-driven robotic systems, which includes robotic hands and arms as well as autonomous mobile robots (AMRs) and automated guided vehicles (AGVs). All are driven by the company’s AI software platform, AgileCore, and are designed to work together.

“The real value for our industrial customers isn’t just a stand-alone intelligent humanoid, but an entire intelligent production system,” Chen said in the statement. “We see [Agile One] working seamlessly alongside our other robotic solutions, each part of the system, connected and learning from each other. This approach of applying Physical AI to whole production systems can give our customers a new level of holistic efficiency and quality.”

Full production of Agile One begins this year.

Safety first: Industry updates standards for humanoid robots


As two-legged and four-legged robots begin to find applications in supply chain operations, the sector is refining its safety standards to ensure that humanoid and collaborative robots can be deployed at scale, according to a December report from Interact Analysis.

The work is necessary because the unique mechanics associated with legged robotics introduce new challenges around stability, fall dynamics, and unpredictable motion, according to report author Clara Sipes, a market analyst at Interact Analysis. To be precise, unlike statically stable machines, dynamically stable machines such as humanoids collapse when power is cut, creating residual risk in the event of a fall.

In response, new standards such as the International Organization for Standardization’s ISO 26058-1 and ISO 25785-1 have been developed to address both statically and dynamically stable mobile robotics. In addition, ISO TR (Technical Report) R15.108 examines the challenges associated with bipedal, quadrupedal, and wheeled balancing mobile robots.

According to the Interact Analysis report, one of the most notable shifts is the removal of references to “collaborative modes.” In the most recent revisions, collaborative robots must be evaluated based on the application, not the robot alone, since each application carries its own risks, and the standard now encourages assessing the entire environment within which the robot operates.

Additional changes cover requirements for improved cyber resilience, the report said. European regulatory changes, particularly the Cyber Resilience Act (CRA), AI Act, and Machinery Regulation, are establishing a unified framework for safety, cybersecurity, and risk management. That will shape the future of industrial automation by addressing new vulnerabilities within products that are increasingly connected to a network.

In its report, Interact Analysis advised manufacturers and integrators in the robotic sector to prepare early for the upcoming standards revisions. With multiple regulations taking effect over the next few years, organizations that begin aligning now will avoid costly redesigns and rushed compliance efforts later, the report noted.

—Ben Ames, Senior News Editor

Transportation and logistics providers see 2026 as critical year for technology to transform business processes

29 January 2026 at 17:48



In his 40 years leading McLeod Software, one of the nation’s largest providers of transportation management systems for truckers and 3PLs (third-party logistics providers), Tom McLeod has seen many a new technology product introduced with much hype and promise, only to fade in real-world practice and fail to mature into a productive application.

In his view, as new tech players have come and gone, the basic demand from shippers and trucking operators has remained straightforward and unchanged over time: “Find me a way to use computers and software to get more done in less time and [at a] lower cost,” he says.

“It’s been the same goal, from decades ago when we replaced typewriters, all the way to today finding ways to use artificial intelligence (AI) to automate more tasks, streamline processes, and make the human worker more efficient,” he adds. “Get more done in less time. Make people more productive.”

The difference between now and the pretenders of the past? McLeod and others believe that AI is the real thing and that, as it continues to develop and mature, it will be incorporated ever deeper into transportation and logistics planning, execution, and supply chain processes, fundamentally reshaping how shippers and logistics service providers operate and manage the supply chain function.

“But it is not a magic bullet you can easily switch on,” McLeod cautions. “While the capabilities look magical, at some level it takes time to train these models and get them using data properly and then come back with recommendations or actions that can be relied upon,” he adds.

THE DATA CONUNDRUM

One of the challenges is that so much supply chain data today remains highly unstructured—by one estimate, as much as 75%. Converting and consolidating myriad sources and formats of data, and ensuring it is clean, complete, and accurate remains perhaps the biggest challenge to accelerated AI adoption.

Often today when a broker is searching for a truck, entering an order, quoting a load, or pulling a status update, someone is interpreting that text or email, extracting information from the transportation management system (TMS), and creating a response to the customer, explains Doug Schrier, McLeod’s vice president of growth and special projects. “With AI, what we can do is interpret what the email is asking for, extract that, overlay the TMS information, and use AI to respond to the customer in an automated fashion,” he says.

To come up with a price quote using traditional methods might take three or four minutes, he’s observed. An AI-enabled process cuts that down to five seconds. Similarly, entering an order into a system might take four to five minutes. With AI interpreting the email string and other inputs, a response is produced in a minute or less. “So if you are doing [that task] hundreds of times a week, it makes a difference. What you want to do is get the human adding the value and [use AI] to get the mundane out of the workflow.”
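The interpret-extract-respond loop Schrier describes can be sketched with plain pattern matching. The email format, field names, and rate table below are hypothetical stand-ins; a production system would use an AI model plus live TMS rate data rather than regexes.

```python
import re

# Hedged sketch of the interpret -> extract -> respond loop. The email
# format, regexes, and rate table are hypothetical stand-ins; a production
# system would use an AI model plus live TMS rate data.
RATE_PER_MILE = {"dry_van": 2.10}  # assumed lane rate, $/mile

def quote_from_email(body):
    """Pull origin, destination, and mileage from a quote request and price it."""
    origin = re.search(r"from\s+([A-Za-z ]+?),", body).group(1)
    dest = re.search(r"to\s+([A-Za-z ]+?),", body).group(1)
    miles = int(re.search(r"(\d+)\s*miles", body).group(1))
    price = miles * RATE_PER_MILE["dry_van"]
    return f"Quote: {origin} -> {dest}, {miles} mi, ${price:,.2f}"

email = "Need a dry van from Atlanta, GA to Dallas, TX, about 780 miles."
reply = quote_from_email(email)
```

The point of the sketch is the workflow shape, not the parsing: extraction feeds structured fields into pricing, and the reply goes back to the customer without a human rekeying anything.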

Yet the growth of AI is happening across a technology landscape that remains fragmented, with some solutions that fit part of the problem, and others that overlap or conflict. Today it’s still a market where there is not one single tech provider that can be all things to all users.

In McLeod’s view, its job is to focus on the mission of providing a highly functional primary TMS platform—and then complement and enhance that with partners who provide a specialized piece of an ever-growing solution puzzle. “We currently have built, over the past three decades, 150 deep partnerships, which equates to about 250 integrations,” says Ahmed Ebrahim, McLeod’s vice president of strategic alliances. “Customers want us to focus on our core competencies and work with best-of-breed parties to give them better choices [and a deeper solution set] as their needs evolve,” he adds.

One example of such a best-of-breed partnership is McLeod’s arrangement with Qued, an AI-powered application developer that provides McLeod TMS clients with connectivity and process automation for every load appointment scheduling mode, whether through a portal, email, voice, or text.

Before Qued was integrated, there were about 18 steps a user had to complete to get an appointment back into the TMS, notes Tom Curee, Qued’s president. With Qued, those steps are reduced to virtually zero and require no human intervention.

As soon as a stop is entered into the TMS, it is immediately and automatically routed to Qued, which reaches out to the scheduling platform or location, secures the appointment, and returns an update into the TMS with the details. It eliminates manual appointment-making tasks like logging on and entering data into a portal, and rekeying or emailing, and it significantly enhances the value and efficiency of this particular workflow activity for McLeod users.
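The hands-off flow described above, a stop entered, the request routed, the appointment secured, and the TMS updated, amounts to an event-driven pipeline. The sketch below uses hypothetical function names and an in-memory dictionary in place of the real McLeod/Qued integration, which works through their respective APIs.

```python
# Hypothetical in-memory stand-in for the TMS stop records.
tms_stops = {}

def schedule_appointment(location: str, window: str) -> str:
    """Stand-in for the scheduler reaching out to a portal or location."""
    return f"{location} confirmed for {window}"

def on_stop_entered(stop_id: str, location: str, window: str) -> dict:
    """Fires when a stop is entered into the TMS: route to the scheduler,
    then write the confirmed appointment back, with no human steps between."""
    confirmation = schedule_appointment(location, window)
    tms_stops[stop_id] = {"location": location, "appointment": confirmation}
    return tms_stops[stop_id]

record = on_stop_entered("STOP-001", "Memphis DC", "Tue 08:00-10:00")
print(record["appointment"])
```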

LEGACY SYSTEM PAIN

One of the effects of the three-year freight recession has been its impact on investment. Whereas in better times, logistics and trucking firms would focus on buying tech to reduce costs, enhance productivity, and improve customer service, the constant financial pressure has narrowed that focus.

“First and exclusively, it is now on ‘How do we create efficiency by replacing people and really bring cost levels down because rates are still extremely low and margins really tight,’” says Bart De Muynck, a former Gartner research analyst covering the visibility and supply chain tech space, and now principal at consulting firm Bart De Muynck LLC.

Most industry operators he’s spoken with have looked at AI. One example he cites as ripe for transformation is freight brokerages, “where you have rows and rows of people on the phone.” They are asking the question “Which of these processes or activities can we do with AI?”

Yet De Muynck points to one issue that is proving to be a roadblock to change and transformation. "For many of these companies, their foundational technology is still on older architectural platforms," in some cases proprietary ones, he notes. "It's hard to combine AI with those." And because of years of low margins and cash flow restrictions, "they have not been able to replace their core ERP [enterprise resource planning system] or the TMS for that carrier or broker, so they are still running on very old tech."

Those players, De Muynck says, will discover a disconcerting reality: the difficulty of trying to apply AI on a platform that is decades old. "That will yield some efficiencies, but those will be short term and limited in terms of replacing manual tasks," he says.

The larger question, De Muynck says, is “How do you reinvent your company to become more successful? How do we create applications and processes that are based on the new architecture so there is a big [transformative] lift and shift [and so we can implement and deploy foundational pieces fairly quickly]? Then with those solutions build something with AI that is truly transformational and effective.” And, he adds, bring the workforce along successfully in the process.

“People have some things in their jobs they have to do 100 times a day,” often a menial or boring task, De Muynck adds. “AI can automate or streamline those tasks in such a way that it improves the employee’s work experience and job satisfaction, while driving efficiencies. [Rather than eliminate a position], brokers can redirect worker time to higher-value, complex tasks that need human input, intuition, and leadership.”

“With logistics, you cannot take people completely out of the equation,” he emphasizes. “[The best AI solutions] will be a human paired up with an intelligent AI agent. It will be a combination of people [and their tribal knowledge and institutional experience] and technology,” he predicts.

EYES OPEN

Shippers, truckers, and 3PLs are experiencing an awakening around the possibilities of technologies today and what modern architecture, in-the-cloud platforms, and AI-powered agents can do, says Ann Marie Jonkman, vice president–industry advisory for software firm Blue Yonder. For many, the hardest decision is where to start. It can be overwhelming, particularly in a market environment shaped by chaos, uncertainty, and disruption, where surviving every week sometimes seems a challenge in itself.

“First understand and be clear about what you want to achieve and the problems you want to solve” with a tech strategy, she advises. “Pick two or three issues and develop clear, defined use cases for each. Look at the biggest disruptions—where are the leakages occurring and how do I start?”

Among the areas of investment she sees most frequently targeted, companies are putting capital and resources into broad automation: not just physical activity with robotics, but business processes, workflows, and operations. It also is about being able to understand tradeoffs, getting ahead of and removing waste, and moving the organization from a reactionary posture to one that’s more proactive and informed and can leverage what Jonkman calls “decision velocity.” That places a priority not only on connecting the silos, but also on incorporating clean, accurate, and actionable data into one command center or control tower. When built and deployed correctly, such central platforms can provide near-immediate visibility into supply chain health as well as more efficient and accurate management of the end-to-end process.

Those investments in supply chain orchestration not only accelerate and improve decision-making around stock levels, fulfillment, shipping choices, and overall network and partner performance, but also provide the ability to “respond to disruption and get a handle on the data to monitor and predict disruption,” Jonkman adds. It’s tying together the nodes and flows of the supply chain so “fulfillment has the order ready at the right place and the right time [with the right service]” to reduce detention and ensure customer expectations are met.

It is important for companies not to sit on the sidelines, she advises. Get into the technology transformation game in some form. “Just start somewhere,” even if it is a small project, learn and adapt, and then go from there. “It does not need to be perfect. Perfection can be the enemy of success.”

The speed of technology innovation always has been rapid, and the advent of AI and automation is accelerating that even further, observes Jason Brenner, senior vice president of digital portfolio at FedEx. “We see that as an opportunity, rather than a challenge.”

He believes one of the industry’s biggest challenges is turning innovation into adoption, “ensuring new capabilities integrate smoothly into existing operations and deliver value quickly.” Brenner adds that in his view, “innovation is healthy and pushes everyone forward.”

Execution at scale is where the rubber meets the road. “Delivering technology that works reliably across millions of shipments, geographies, and constantly changing conditions requires deep operational integration, massive data sets, and the ability to test solutions in multiple environments,” he says. “That’s where FedEx is uniquely positioned.”

DEFYING AUTOMATION NO MORE

Before the arrival of the newest forms of AI, “there were shipping tasks that had defied automation for decades,” notes Mark Albrecht, vice president of artificial intelligence for freight broker and 3PL C.H. Robinson. “Humans had to do this repetitive, time-consuming—I might even say mind-numbing—yet essential work.”

Application of early forms of AI, such as machine learning tools and algorithms, provided a hint of what was to come. CHR, which has one of the largest in-house IT development groups in the industry, has been using those for a decade.

Large language models and generative AI were the next big leap. “It’s the advent of agentic AI that opens up new possibilities and holds the greatest potential for transformation in the coming year,” Albrecht says, adding, “Agentic AI doesn’t just analyze or generate content; it acts autonomously to achieve goals like a human would. It can apply reasoning and make decisions.”

CHR has built and deployed more than 30 AI agents, Albrecht says. Collectively, they have performed millions of once-manual tasks—and generated significant benefits. “Take email pricing requests. We get over 10,000 of those a day, and people used to open each one, read it, get a quote from our dynamic pricing engine, and send that back to the customer,” he notes. “Now a proprietary AI agent does that—in 32 seconds.”

Another example is load tenders. “It used to take our people upwards of four hours to get to those through a long queue of emails,” he recalls. That work is now done by an AI agent that reads the email subject line, body, and attachments; collects other needed information; and “turns it into an order in our system in 90 seconds,” Albrecht says. He adds that if the email is for 20 orders, “the agent can handle them simultaneously in the same 90 seconds,” whereas a human would have to handle them sequentially.
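Albrecht's point about simultaneity, 20 tenders handled in the same 90 seconds a human would need per order, is at bottom a concurrency claim. A minimal sketch with Python's `concurrent.futures` illustrates the shape of it; the tender processing itself is mocked, and the 0.1-second sleep is an arbitrary stand-in for the real agent's work.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_tender(order_id: int) -> str:
    """Mock of an agent turning one emailed tender into a system order.
    The short sleep stands in for the real ~90 s of agent work."""
    time.sleep(0.1)
    return f"ORDER-{order_id:04d} created"

order_ids = list(range(1, 21))  # an email tendering 20 loads

# Sequential: total time scales with the number of orders.
start = time.perf_counter()
sequential = [process_tender(i) for i in order_ids]
seq_elapsed = time.perf_counter() - start

# Concurrent: all 20 run at once, so total time is roughly one order's time.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    concurrent = list(pool.map(process_tender, order_ids))
conc_elapsed = time.perf_counter() - start

print(f"sequential: {seq_elapsed:.2f}s, concurrent: {conc_elapsed:.2f}s")
```

With 20 workers the concurrent pass finishes in roughly the time of a single order, which is the same shape as the 90-seconds-for-20-orders figure in the article.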

Time is money for the shipper at every step of the logistics process. So the faster a rate quote is provided, order created, carrier selected, and load appointment scheduled, the greater the benefits to the shipper. “It’s all about speed to market, which whether a retailer or manufacturer, often translates into if you make the sale or keep an assembly line rolling.”

LOOKING AHEAD

Strip away all the hype, and the one tech deliverable that remains table stakes for all logistics providers and their customers is a platform that provides a timely and accurate view into where goods are and with whom, and when they will get to their destination. “First and foremost is real-time visibility that enables customer access to the movement of their product across the supply chain,” says Penske Executive Vice President Mike Medeiros. “Then, getting further upstream and allowing them to be more agile and responsive to disruptions.”

As for AI, “it’s not about replacing [workers]; it’s about pointing them in the right direction and helping [them] get more done in the same amount of time, with a higher level of service and enabling a more satisfying work experience. It’s human capital complemented by AI-powered agents as virtual assistants. We’ve already [started] down that path.”

Small solar project for Maine Habitat for Humanity branch makes big difference

28 January 2026 at 19:29

In Maine, the Habitat for Humanity of Waldo County (HFHWC) has installed a solar project at its recently opened ReStore that will offset 100% of the facility’s electricity use. The 18.92-kW system was installed in partnership with nonprofit solar provider Everybody Solar. The solar project will enable HFHWC to direct more resources toward building and…

The post Small solar project for Maine Habitat for Humanity branch makes big difference appeared first on Solar Power World.

NVIDIA DLSS 4.5 Delivers Super Resolution Upgrades and New Dynamic Multi Frame Generation

14 January 2026 at 14:00

NVIDIA DLSS 4 with Multi Frame Generation has become the fastest-adopted NVIDIA gaming technology ever. Over 250 games and apps use it to make real-time path tracing possible—and upcoming titles for 2026, including PRAGMATA and Resident Evil Requiem, also plan to incorporate the software. At CES 2026, the technology became even more powerful. NVIDIA introduced DLSS 4.5…


How to Get Started with Neural Shading for Your Game or Application

13 November 2025 at 19:55

For the past 25 years, real-time rendering has been driven by continuous hardware improvements. The goal has always been to create the highest fidelity image possible within 16 milliseconds. This has fueled significant innovation in graphics hardware, pipelines, and renderers. But the slowing pace of Moore’s Law mandates the invention of new computational architectures to keep pace with the…


Malaysia Starts Initial Phase of Electric Bus Re-fleeting, Targeting 1,100 Units by 2030

21 January 2026 at 03:27

Over a thousand electric buses will be introduced to Malaysia’s road network, starting with the capital Kuala Lumpur, as the country’s long-stalled push to decarbonize public transport is finally breaking out of pilot mode. In a report by the Malaysian Transport Ministry, Prasarana Malaysia Berhad, the country’s dominant public transport ... [continued]

The post Malaysia Starts Initial Phase of Electric Bus Re-fleeting, Targeting 1,100 Units by 2030 appeared first on CleanTechnica.

Human Error in Cybersecurity and the Growing Threat to Data Centers

19 January 2026 at 17:00

Cyber incidents continued to escalate in 2025, and their impact is only growing as we transition into 2026. The rapid evolution of novel cyber threats leaves data centers increasingly exposed to disruptions that extend beyond traditional IT boundaries.

The Uptime Institute’s annual outage analysis shows that in 2024, cyber-related disruptions occurred at roughly twice the average rate seen over the previous four years. This trend aligns with findings from Honeywell’s 2025 Cyber Threat Report, which identified a sharp increase in ransomware and extortion activity targeting operational technology environments based on large-scale system data.

There is much discussion today of infrastructure complexity and attack sophistication, but it is a lesser-known reality that human error in cybersecurity remains a central factor behind many of these incidents. Routine configuration changes, access decisions, or actions taken under stress can create conditions that allow errors to sneak in. In high-availability environments, human error is often the point at which otherwise contained threats begin to escalate into bigger problems.

As cyberattacks on data centers continue to grow in number, downtime carries ever heavier financial and reputational consequences. Addressing human error in cybersecurity means recognizing that human behavior plays a direct role in how a security architecture performs in practice. Let’s take a closer look.

How Attackers Take Advantage of Human Error in Cybersecurity

Cyberattacks often exploit vulnerabilities that stem from superficial, even preventable, mistakes as well as from deeper, systemic issues. Human error in cybersecurity often arises when established procedures are not followed consistently, creating gaps that attackers are more than eager to exploit. A delayed firmware update or an incomplete maintenance task can leave infrastructure exposed, even when the risks are already known. And even when organizations have defined policies to reduce these exposures, noncompliance or insufficient follow-through often weakens their effectiveness.

In many environments, operators are aware that parts of their IT and operational technology infrastructure carry known weaknesses, but due to a lack of time or oversight, they fail to address them consistently. Limited training also adds to the problem, especially when employees are expected to recognize and respond to social engineering techniques. Phishing, impersonation, and ransomware attacks are increasingly targeting organizations with complex supply chains and third-party dependencies, and in these situations, human error often enables the initial breach, after which attackers move laterally through systems, using minor mistakes to trigger disruptions.

Why Following Procedures Is Crucial

Having policies in place doesn’t always guarantee consistent follow-through. In everyday operations, teams have to juggle many things at once: updates, alerts, and routine maintenance, and small steps can be missed unintentionally. Even experienced staff make these kinds of mistakes, especially when managing large or complex environments over an extended period. Gradually, these small oversights add up and leave systems exposed.

Account management works similarly. Password rules and policies for handling inactive accounts are usually well-defined; however, they are not always applied uniformly. Dormant accounts may go unnoticed, fall behind on updates, or escape regular review. Human error in cybersecurity often develops step by step through workload, familiarity, and everyday stress, not because of a lack of skill or awareness.
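The dormant-account problem lends itself to simple automation: compare each account's last login against a retention threshold and flag the stragglers. The account data and the 90-day cutoff below are illustrative assumptions, not figures from the article.

```python
from datetime import datetime, timedelta

DORMANCY_THRESHOLD = timedelta(days=90)  # illustrative cutoff, not a standard

accounts = [  # hypothetical directory export
    {"user": "alice", "last_login": datetime(2026, 1, 10)},
    {"user": "bob", "last_login": datetime(2025, 6, 1)},
    {"user": "svc-backup", "last_login": datetime(2025, 3, 15)},
]

def flag_dormant(accounts, now=None):
    """Return users whose last login is older than the threshold."""
    now = now or datetime.now()
    return [a["user"] for a in accounts
            if now - a["last_login"] > DORMANCY_THRESHOLD]

print(flag_dormant(accounts, now=datetime(2026, 1, 19)))
```

A periodic job like this does not replace an access review, but it turns "dormant accounts may go unnoticed" from a standing risk into a short list someone is accountable for clearing.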

The Danger of Interacting With Social Engineering Without Knowing It

Social engineering is a method of attack that uses deception and impersonation to influence people into revealing information or providing access. It relies on trust and context to make people perform actions that appear harmless and legitimate in the moment.

The trick of these impersonations is that they mirror everyday communication very accurately. Attackers today have all the tools to impersonate colleagues, service providers, or internal support staff. A phone call from someone claiming to be part of the IT help desk can easily seem routine, especially when framed as a quick fix or standard check. Similar approaches appear in emails and on messaging platforms, and the pattern is the same: urgency overrides caution.

With the various new tools available, visual deception has become very common. Employees may be directed to login pages that closely resemble internal systems and enter credentials without hesitation. Emerging techniques like AI-assisted voice or video impersonation further blur the line between legitimate requests and malicious activity, making social engineering interactions very difficult to recognize in real time.

Ignoring Security Policies and Best Practices

It’s not enough for security policies to exist as formal documentation if they are not followed consistently on the floor. Even when access procedures are defined, employees under time pressure can make undocumented exceptions. Access policies or change management rules, for example, may require peer review and approval, but urgent maintenance or capacity pressures often lead to decisions that bypass those steps.

These small deviations create gaps between how systems are supposed to be protected and how they are actually handled. When policies become situational or optional, security controls lose their purpose and reliability, leaving the infrastructure exposed, even though there’s a mature security framework in place.

When Policies Leave Room for Interpretation

Policies that lack precision introduce variability into how security controls are applied across teams and shifts. When procedures don’t explicitly define how credentials should be managed on shared systems, retained login sessions or administrative access can remain in place beyond their intended scope. Similarly, if requirements for password rotation or periodic access reviews are loosely framed or undocumented, they are more likely to be deferred during routine operations.

These conditions rarely trigger immediate alerts or audit findings. However, over time, they accumulate into systemic weaknesses that expand the attack surface and increase the likelihood of attacks.

Best Practices That Erode in Daily Operations

Security issues often emerge through slow, incremental changes. When operational pressure increases, teams may come to rely on informal workarounds to keep everything running. Routine best practices like updates, access reviews, and configuration standards can slip down the priority list or become sloppy in their application. Individually, these decisions can seem reasonable in the moment; over time, however, they add up and dilute established safeguards, leaving the organization exposed even without a single clearly identifiable incident.

Overlooking Access and Offboarding Control

Ignoring best practices around access management introduces another set of risks. Employees and third-party contractors often retain privileges beyond their active role if offboarding steps are not followed through. In the absence of clear deprovisioning rules, such as disabling accounts, dormant access can linger unnoticed. These inactive accounts are rarely monitored closely enough to detect misuse or compromise when it happens.
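One concrete guard against lingering access is reconciling the directory against the current roster: any enabled account with no active employee behind it is a deprovisioning miss. The snapshots below are made up for illustration; in practice the two sets would come from the identity provider and the HR system.

```python
# Hypothetical snapshots: who HR says is active vs. who can still log in.
active_employees = {"alice", "bob"}
enabled_accounts = {"alice", "bob", "carol", "vendor-temp"}

def offboarding_misses(enabled: set, roster: set) -> list:
    """Enabled accounts with no matching active employee."""
    return sorted(enabled - roster)

print(offboarding_misses(enabled_accounts, active_employees))
```

Running a check like this on a schedule gives offboarding the same kind of routine verification the article recommends for other controls.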

Policy Gaps During Incident Response

The consequences of ignoring procedures become most visible when an actual cybersecurity incident occurs. When teams are forced to act quickly without clear guidance, errors start to surface. Procedures that are outdated, untested, or difficult to locate offer little support during an emergency. No policy can eliminate risk completely; however, organizations that treat procedures as living, enforceable tools are better positioned to respond effectively when an incident occurs.

A Weak Approach to Security Governance

Weak security governance often allows risks to persist unnoticed, especially when oversight from management is limited or unclear. Without clear ownership and accountability, routine tasks like applying security patches or reviewing alerts can be delayed or overlooked, leaving systems exposed. These seemingly insignificant gaps create an environment over time in which vulnerabilities are known but not actively addressed.

Training plays a very important role in closing this gap, but only when it is treated as part of governance, not as an isolated activity. Regular, structured training helps employees develop a habit of verification and reinforces the checks and balances defined by organizational policies. To remain effective, training has to evolve in tandem with the threat landscape. Employees need ongoing exposure to emerging attack techniques and practical guidance on how to recognize and respond to them within their daily workflows. Aligned governance and training help organizations position themselves better to reduce risk driven by human factors.

Understanding the Stakes

Human error in cybersecurity is often discussed as a collection of isolated missteps, but in reality, it reflects how people operate within complex systems under constant pressure.

In data center environments, these errors rarely occur as isolated events but are influenced by interconnected processes, tight timelines, and attackers who deliberately exploit trust, familiarity, and routine behavior. Looking at it from this angle, human error doesn’t show only individual mistakes but provides insight into how risks develop across an organization over time.

Recognizing the role of human error in cybersecurity is essential for reducing future incidents, but awareness alone is not enough. Training also plays an important role, but it cannot compensate for unclear processes, weak governance, or a culture that prioritizes speed over safety.

Data center operators have to continuously adapt their security practices and reinforce expectations through daily operations instead of treating security best practices as rigid formalities. Building a culture where employees understand how their actions influence security outcomes helps organizations respond more effectively to evolving threats and limits the conditions that allow small errors to turn into major, devastating incidents.

# # #

About the Author

Michael Zrihen is the Senior Director of Marketing & Internal Operations Manager at Volico Data Centers.

The post Human Error in Cybersecurity and the Growing Threat to Data Centers appeared first on Data Center POST.
