The AI revolution is pushing the data center industry toward gigawatt-scale campuses. But the real question today is not how large a facility can be built. The real question is how quickly power can be converted into revenue.
Consider a 1 gigawatt data center project. One gigawatt equals one thousand megawatts of capacity. In today’s market, typical infrastructure costs for large data centers range between 8 million and 12 million dollars per megawatt for standard facilities. That places the infrastructure cost of a 1 GW campus between 8 billion and 12 billion dollars.
In many U.S. markets, developers are seeing costs closer to 10 to 14 million dollars per megawatt, which would place a 1 GW campus between 10 and 14 billion dollars. AI-optimized data centers can be even more expensive due to high-density racks, liquid cooling systems, and larger electrical infrastructure. Those facilities can reach 15 to 20 million dollars per megawatt, pushing a 1 GW campus to 15 to 20 billion dollars in infrastructure alone.
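To make the arithmetic concrete, here is a minimal sketch that multiplies the per-megawatt ranges cited above across a full gigawatt; the figures are the article's published ranges, not quotes for any specific project:

```python
# Rough capex ranges for a 1 GW campus, using the per-MW ranges cited above.
CAPACITY_MW = 1_000  # 1 GW = 1,000 MW

ranges = {
    "standard facility": (8e6, 12e6),      # $/MW
    "typical U.S. market": (10e6, 14e6),   # $/MW
    "AI-optimized": (15e6, 20e6),          # $/MW
}

for label, (low, high) in ranges.items():
    print(f"{label}: ${low * CAPACITY_MW / 1e9:.0f}B - ${high * CAPACITY_MW / 1e9:.0f}B")
```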
Once servers, GPUs, networking equipment, and storage are installed, the total project value can easily exceed 30 billion dollars. But capital cost is no longer the biggest constraint; energy is.
According to the International Energy Agency, global data center electricity consumption reached roughly 415 terawatt hours in 2024, representing about 1.5 percent of global electricity demand. That number is projected to approach 800 terawatt hours by 2030 as AI adoption accelerates. At the same time, power infrastructure is struggling to keep up. The United States interconnection queue alone now exceeds 2 terawatts of generation capacity waiting for approval, and in many regions new grid connections can take three to six years. This creates a major financial challenge for traditional hyperscale development.
Large buildings are often constructed years before sufficient power becomes available. Hundreds of megawatts of capacity can sit idle while developers wait for substations, transmission lines, and utility upgrades. On a one gigawatt campus that could mean billions of dollars tied up in infrastructure waiting for power.
Now compare that with a modular campus strategy.
Instead of constructing massive buildings designed for the full gigawatt from day one, the campus can be deployed incrementally as power becomes available. A one gigawatt campus could begin with a 20 megawatt deployment. Using the same industry pricing ranges, that first deployment would require between 160 and 240 million dollars at eight to twelve million dollars per megawatt, or up to 300 to 400 million dollars if the facility is designed for high-density AI workloads. What makes this model powerful is how quickly revenue can begin.
In many markets AI capacity is leasing between 150 thousand and 250 thousand dollars per megawatt per month depending on location and density. A 20 megawatt deployment can therefore generate roughly 3 to 5 million dollars per month, or approximately 36 to 60 million dollars per year, while the rest of the campus continues expanding. Instead of waiting years for a massive hyperscale facility to be completed, the project can begin generating revenue within 12 to 18 months.
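A similar back-of-the-envelope check for the phased model, using the same article figures (illustrative only):

```python
# Phased 20 MW deployment: capex and indicative revenue, using the
# article's ranges. Lease rates are $/MW/month; all figures illustrative.
deploy_mw = 20
capex_low, capex_high = 8e6 * deploy_mw, 12e6 * deploy_mw   # $160M - $240M
lease_low, lease_high = 150_000, 250_000                    # $/MW/month

monthly_low, monthly_high = deploy_mw * lease_low, deploy_mw * lease_high
print(f"Capex:   ${capex_low/1e6:.0f}M - ${capex_high/1e6:.0f}M")
print(f"Monthly: ${monthly_low/1e6:.1f}M - ${monthly_high/1e6:.1f}M")
print(f"Annual:  ${monthly_low*12/1e6:.0f}M - ${monthly_high*12/1e6:.0f}M")
```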
As additional power becomes available the campus grows from twenty megawatts to one hundred megawatts, then several hundred megawatts, and eventually the full one gigawatt capacity. By the time the campus reaches full scale, the project may already be generating hundreds of millions of dollars annually.
There is also another strategic advantage that is becoming increasingly important: mobility of infrastructure.
If power availability changes, new energy sources come online, or grid constraints shift to another region, modular facilities can be redeployed where energy exists. Massive fixed hyperscale buildings cannot move.
This dramatically changes the risk profile.
Traditional hyperscale development concentrates 10 to 20 billion dollars into a single permanent structure. Modular campuses distribute capital across infrastructure that scales directly with available power.
In a world where energy has become the limiting factor for digital growth, the future of hyperscale development may not be one giant building. It may be gigawatt scale campuses built from modular infrastructure designed to grow with power.
# # #
About the Author
Kliton Agolli, Co-Founder, Board Member & Director of Global Growth, Northstar Technologies Group | Naples, Florida.
Kliton Agolli is a senior security and international business development executive with more than 35 years of experience operating at the intersection of national security, executive protection, counterintelligence, and global commercial expansion. His career spans military service, law enforcement, VIP and diplomatic protection, healthcare and hospitality security, and cross-border business development in complex and high-risk environments.
At Northstar Technologies Group, Mr. Agolli leads global growth strategy, international partnerships, and strategic market expansion. He plays a key role in aligning advanced security and infrastructure technologies with government, defense, healthcare, and mission-critical commercial clients worldwide. His work focuses on risk-informed growth, regulatory compliance, and building long-term strategic alliances across Europe, the Middle East, and the United States.
Metro Connect USA 2026 brought the digital infrastructure community together in Fort Lauderdale, Florida, Feb. 23 to 25, as executives, investors and network operators gathered to discuss the evolving connectivity landscape. Over three days, conversations across keynote sessions, panels and private meetings focused on how the industry is adapting to the rapid growth of artificial intelligence, cloud services and bandwidth demand.
The 2026 event drew more than 3,700 decision-makers representing over 1,200 companies, reflecting the scale of collaboration and investment shaping the next phase of digital infrastructure development in the United States.
Artificial intelligence was a central theme throughout the conference. Industry leaders discussed how AI workloads are driving new requirements for data center capacity, fiber connectivity and power infrastructure. As AI adoption expands beyond hyperscale environments into enterprise applications and edge deployments, operators are facing increasing pressure to scale networks capable of supporting high-volume data movement and compute-intensive workloads.
Fiber infrastructure also remained a key topic. Discussions throughout the event highlighted continued investment in metro fiber expansion, long-haul backbone routes and fiber-to-the-home networks. As cloud platforms, streaming services and AI applications generate greater data traffic, fiber continues to serve as the underlying foundation supporting the digital economy.
Several speakers addressed how infrastructure and investment strategies are evolving alongside these shifts. Marc Ganzi, Chief Executive Officer at DigitalBridge, discussed the continued influx of capital into digital infrastructure and the importance of disciplined investment as the sector scales. Steve Smith, Chief Executive Officer at Zayo Group, highlighted the role of fiber expansion in supporting enterprise connectivity and hyperscale demand. Alex Hernandez, CEO of PowerBridge, participated in discussions focused on the growing power demands associated with AI infrastructure, including how utilities, data center developers and investors are working to expand power capacity and modernize energy delivery to support large-scale computing environments.
From the investment perspective, Santhosh Rao, Managing Director, Head of Digital Infrastructure at MUFG, explored the evolving capital structures supporting infrastructure development, including structured financing and private credit solutions. Anton Moldan, Senior Managing Director at Macquarie Group, shared insights into how institutional investors continue to evaluate digital infrastructure assets as a long-term growth opportunity within global infrastructure portfolios.
Beyond the formal sessions, Metro Connect remains known for its highly productive networking environment. Thousands of meetings took place across the event’s exhibit floor, private meeting rooms and curated networking gatherings, reinforcing the conference’s reputation as a place where partnerships are formed and transactions begin. Many participants noted that the event continues to serve as a gathering point for companies exploring partnerships, investment opportunities and infrastructure projects.
Looking ahead, the industry will reconvene next year as Metro Connect USA 2027 moves to a new venue. The event will take place February 8–10, 2027 at the Diplomat Beach Resort in Hollywood, Florida.
At Metro Connect USA 2026, held February 22-25 in Fort Lauderdale, Marc Ganzi, Chief Executive Officer of DigitalBridge, delivered a keynote outlining how artificial intelligence is reshaping the digital infrastructure industry. In his address, “Digital Infra 3.0: Building the AI Industrial Revolution,” Ganzi described how the sector is evolving from a connectivity-focused market into a broader ecosystem that includes data centers, fiber networks, edge computing, and energy infrastructure.
Ganzi emphasized that AI has moved beyond hype and is beginning to generate measurable outcomes across industries. While much of the public discussion focuses on applications and large language models, he noted that the true monetization of AI will occur through enterprise and industrial use cases. Manufacturing, agriculture, healthcare, and transportation are already integrating AI-driven automation, robotics, and predictive analytics to improve productivity and efficiency.
These developments rely on a layered infrastructure environment. Hyperscale facilities train AI models, while edge data centers support inferencing workloads closer to where data is used. Fiber networks provide the low-latency connectivity required to move massive volumes of data between locations, and wireless systems connect devices and sensors in the physical world. Beneath all of these components sits an increasingly critical factor: power.
Power availability was a central theme of Ganzi’s keynote. As AI workloads grow, electricity demand is rising faster than grid capacity can expand. The digital infrastructure industry is now leasing significantly more power than the grid can bring online each year, creating a widening gap between supply and demand. As a result, developers are increasingly operating as energy strategists, exploring diversified energy approaches that may include microgrids, battery storage, solar, wind, and natural gas generation.
The search for reliable power is also influencing where new infrastructure is built. While traditional hubs such as Northern Virginia remain central to the industry, developers are exploring additional markets where grid access and energy availability make large-scale AI deployments possible. In many cases, power availability has become the deciding factor in site selection.
Despite the focus on energy, Ganzi reminded the audience that connectivity remains essential to the AI economy. The ability to move enormous amounts of data across networks continues to depend on high-capacity fiber infrastructure and low-latency connectivity. Even as AI advances in software and hardware, the underlying network infrastructure remains fundamental.
Ganzi also described the evolution of AI infrastructure in phases. The industry has moved through the early stage of training large language models and is now entering a period where inferencing and edge deployments are expanding. The next stage will involve integrating AI directly into physical environments, where intelligent systems control machines, robotics, and automated processes across multiple industries.
As the sector expands, developers face growing challenges that include power constraints, permitting delays, supply chain pressures, water usage concerns, and increased scrutiny from investors. Ganzi stressed that success will depend on operational discipline, strong customer relationships, and the ability to deliver infrastructure projects reliably and on schedule.
Ultimately, he framed the current moment as the beginning of Digital Infra 3.0, a phase in which digital infrastructure converges with traditional infrastructure to support the AI economy. As AI adoption accelerates, the companies that successfully combine power, connectivity, and compute will play a defining role in building the foundation for the next era of global digital infrastructure.
The discussion around digital infrastructure, connectivity, and AI will continue at the next major Capacity event, International Telecoms Week (ITW) in Washington, D.C., May 18-21, 2026.
AI spending is accelerating at a pace most enterprise budgets simply can’t match. While IT leaders are under pressure to deliver transformative AI capabilities, their capital budgets aren’t growing at the same rate as these AI ambitions. This mismatch is forcing difficult trade-offs: delaying projects, stretching aging infrastructure beyond its intended lifecycle, and diverting funding from other critical initiatives.
But there is another option. Increasingly, IT leaders are turning to technology leasing as a savvy strategy to help expedite AI adoption without sacrificing operational agility or financial liquidity.
AI: Thinking Through the Dollars and Sense
From my vantage point, working closely with IT leaders across industries, I hear the lament. AI infrastructure is expensive and highly concentrated, particularly GPU-based compute power. A single GPU cluster designed to support large-scale AI workloads can cost hundreds of thousands to millions of dollars. For enterprise-wide deployments, total data center investments can easily reach $150 million, and as much as $500 million.
For mid-tier enterprises, challenges are even greater, as many lack the balance-sheet strength to secure traditional credit for such large capital expenditures. Some resort to private equity or high-interest lenders. But even those who can afford to purchase the infrastructure outright are frustrated by the pace of AI innovation and the risk of technology quickly becoming outdated or obsolete.
For determined IT leaders, the question is not whether to invest in AI infrastructure, but how to fund it without compromising the broader IT roadmap. This is where the financing strategy becomes just as important as the technology strategy.
IT leasing eases these pressures in several critical ways:
Minimizing upfront costs. Traditional purchasing requires a massive outlay of capital, sometimes forcing companies to scale back or winnow down the scope of projects despite urgent demand. Leasing converts that one-time expense into predictable monthly payments. Instead of committing $50 million upfront, an organization can structure payments over time, freeing capital for additional initiatives and allowing multiple AI projects to move forward simultaneously.
Enhancing flexibility and reducing financial risk. Purchased technology sits on the balance sheet and depreciates over a fixed period. If business needs shift or the organization upgrades early, it can trigger book losses. Leasing – when structured properly – can classify equipment as an operating expense, keeping it off the balance sheet and enabling companies to pivot more easily without the burden of carrying these assets.
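To make the cash-flow difference concrete, here is a minimal sketch comparing an upfront purchase with level monthly lease payments. The $50 million figure comes from the example above; the term, rate, and residual value are illustrative assumptions, not quoted vendor terms:

```python
# Upfront purchase vs. level monthly lease payments (illustrative only).
# Assumptions: $50M equipment cost, 36-month term, 8% annual rate,
# 15% residual value -- none of these are quoted vendor terms.
cost, months, annual_rate, residual_pct = 50e6, 36, 0.08, 0.15

r = annual_rate / 12                                        # monthly rate
financed = cost - cost * residual_pct / (1 + r) ** months   # PV of residual offsets cost
payment = financed * r / (1 - (1 + r) ** -months)           # standard annuity formula

print(f"Upfront purchase: ${cost/1e6:.0f}M on day one")
print(f"Monthly lease payment: ${payment/1e6:.2f}M over {months} months")
```

Under these assumptions the same capacity costs roughly $1.4 million per month rather than $50 million on day one, which is the liquidity argument in miniature.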
Lease the Entire AI Stack, Not Just the Hardware
IT leaders recognize today’s AI deployments extend far beyond servers. Enterprises are leasing high-performance GPU servers optimized for AI model training and inference, along with high-speed networking equipment, enterprise storage systems, integrated “rack and roll” data center solutions, firewalls, and AI-specific software.
Maintenance contracts, security tools, and embedded applications can all be incorporated into a single lease structure.
This bundling delivers administrative and compliance benefits. Because hardware typically carries a residual value, the amount amortized across the lease term often runs 10–15% below purchase cost. Software licenses and other “soft costs” are included in payments and expire at term end, eliminating resale complications. Clients are responsible only for the hardware at lease completion, simplifying compliance and ensuring security updates, patches, and licenses remain current throughout the lifecycle.
Combat Obsolescence Before It Becomes a Liability
One of the most common concerns I hear from executives is technology obsolescence. And given the pace of AI, where innovation cycles are measured in months, not years, that concern is justified.
Leasing naturally enforces a rigor and discipline for countering obsolescence. A three- or four-year term creates a defined decision point: extend, buy out or upgrade the technology. This prevents the “set it and forget it” ownership mindset that often leads to aging, unsupported systems and expensive, reactive refresh cycles. In AI environments, delaying upgrades can multiply total costs through inefficiencies and lost competitive advantage.
Leasing is a Budget Multiplier
Looking ahead to 2026 and beyond, IT leaders must think differently about capital allocation. No one can predict what the AI landscape will look like in three years. Owning large volumes of rapidly depreciating infrastructure can limit strategic agility.
Leaders must also factor in the full lifecycle cost of AI infrastructure, which includes equipment refreshes, secure data wiping, asset disposition, and regulatory compliance. These factors carry operational and financial burdens when assets are owned outright.
The most important priority today is building a strategy that enables AI adoption with minimal upfront cost and maximum flexibility. Leasing can act as a budget multiplier. Instead of exhausting capital on one large acquisition, organizations can deploy that same funding across predictable monthly payments, preserving liquidity while expanding total project capacity. In doing so, IT leaders maintain momentum across their complete technology roadmap, ensuring AI transformation doesn’t come at the expense of operational resilience.
# # #
About the Author
Frank Sommers brings 30 years of experience in the IT leasing industry, working closely with global enterprise organizations to help them modernize infrastructure while preserving capital and accelerating technology adoption. Known for consistently exceeding sales targets, Frank has also developed and led numerous successful vendor financing programs in partnership with major resellers, creating flexible acquisition models that support complex IT environments. His deep expertise in IT lifecycle management, financing strategies, and enterprise procurement has made him a trusted advisor across the industry. A former collegiate soccer player at Cal Poly San Luis Obispo, Frank brings the same competitiveness and teamwork to every client relationship.
The rapid evolution of artificial intelligence has moved from a software trend to a massive physical infrastructure challenge. While headlines often focus on the gigawatt-scale builds of hyperscalers, a significant portion of the AI boom is occurring in the “mid-market”: enterprise data centers, regional colocation hubs, and edge facilities. For these small-to-mid-scale (SMS) operators, the challenge of hosting high-performance graphics processing units (GPUs) and AI accelerators exceeding thermal design powers of 1,000 watts is even more acute. Unlike hyperscalers with dedicated research teams, SMS players must find ways to adapt existing “brownfield” infrastructure to manage unprecedented heat without the luxury of starting from scratch.
The Mid-Market Liquid Cooling Transition
For decades, air cooling was the “flat and boring” standard for the computer centers found in banks, universities, and regional hosting firms. However, as rack densities climb from a traditional 5 kW toward 50 kW or even 100 kW, traditional air-conditioning methods are reaching a physical ceiling. In fact, 2026 is seeing a surge in retrofit activity as colocation sites struggle to let mixed densities coexist efficiently.
The primary hurdle for SMS operators is not just the cooling capacity itself, but the operational complexity and capital investment required for a liquid-cooled transition. Many operators are now moving toward integrated cooling platforms that bridge the building’s traditional chilled-water loop and the new high-density server racks.
The CDU as a Bridge for Existing Facilities
At the center of this shift is the Coolant Distribution Unit (CDU). For a mid-market operator, the CDU acts as a critical thermal “bridge.” A liquid-to-liquid CDU effectively isolates the facility’s existing water loop from the sensitive, high-value electronics via a secondary fluid network (SFN).
This isolation is particularly valuable for colocation and enterprise sites because it allows managers to precisely control the fluid chemistry, flow rate, and temperature for a specific “GPU-heavy” cluster without needing to overhaul the entire building’s plumbing. In-rack CDUs, in particular, offer targeted cooling with a smaller footprint and simplified deployment, making them ideal for the edge or regional high-density setups where floor space is at a premium.
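For a rough sense of the thermal arithmetic behind a secondary fluid network, here is a minimal sketch using the standard sensible-heat relation for water-based coolants; the heat load and temperature rise are assumed values, not vendor specifications:

```python
# Required coolant flow for a given heat load: Q = m_dot * cp * dT.
# For water-based coolants, cp is roughly 4.18 kJ/(kg*K).
q_kw = 100        # assumed rack/cluster heat load to reject, kW
cp = 4.18         # specific heat of water, kJ/(kg*K)
dt = 10.0         # assumed coolant temperature rise across the servers, K

mass_flow = q_kw / (cp * dt)          # kg/s
litres_per_min = mass_flow * 60       # ~1 kg per litre for water

print(f"~{mass_flow:.2f} kg/s ({litres_per_min:.0f} L/min) to reject {q_kw} kW at dT = {dt} K")
```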
Reliability Through Precision Chemistry
For smaller teams with fewer on-site cooling specialists, the coolant formulation itself becomes a strategic reliability factor. Standard water or traditional glycols often lack the long-term material compatibility required for modern direct-to-chip cooling, where incompatible metals can trigger galvanic corrosion.
SMS operators are increasingly adopting next-generation coolants that match high-performance specifications while offering a lower carbon footprint. To manage these complex fluids, advanced telemetry – such as Ecolab’s 3D TRASAR™ technology – can now be built directly into smart CDUs. This “connected coolant” approach monitors pH, conductivity, and glycol concentration in real-time, allowing smaller teams to shift from reactive maintenance to proactive adjustments. By automating these checks, operators can extend maintenance intervals and significantly reduce the risk of early-life failures.
Stewardship as a Strategic Requirement
As data centers increasingly embed themselves in metro and suburban locations to support low-latency AI, they face rising community scrutiny regarding resource use. Small-to-mid-scale operators must now balance Power Usage Effectiveness (PUE) with Water Usage Effectiveness (WUE) to maintain their social license to operate.
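Both metrics are simple ratios (the standard industry definitions, stated here for reference):

```latex
\mathrm{PUE} = \frac{\text{total facility energy (kWh)}}{\text{IT equipment energy (kWh)}},
\qquad
\mathrm{WUE} = \frac{\text{annual site water use (liters)}}{\text{IT equipment energy (kWh)}}
```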
A roadmap for this shift is visible in programs like Microsoft’s Community-First AI Infrastructure initiative, launched in early 2026. This framework commits to five core pillars, including concrete promises to minimize operational water consumption and replenish more water than facilities withdraw. For SMS players, following these stewardship best practices is not just about ethics; it is about securing permits and ensuring long-term operational resilience in power- and water-constrained regions.
Future-Proofing with “Cooling as a Service”
To overcome the “high CapEx” barrier of liquid cooling, many operators are turning to service-led models like Cooling as a Service (CaaS). These models convert complex thermal management stacks into predictable, auditable outcomes. By leveraging specialized vendors who handle commissioning, fluid analysis, and real-time monitoring, SMS data centers can scale their AI capabilities as quickly as the platforms change, without over-engineering their facilities for an uncertain future.
Ultimately, the transition to liquid cooling is not just for the giants of the industry. By integrating smart hardware, precision chemistry, and service-based models, small-to-mid-scale operators can bridge the density gap and reliably host the next generation of mission-critical AI workloads.
# # #
About the Author
Bob Walicki is an innovation leader with nearly 20 years of experience in research, development and engineering at Ecolab, a global leader in water and infection prevention solutions. He is currently responsible for driving innovation for Ecolab’s Global High Tech Data Centers segment. Most of Bob’s career has been oriented to solving customer problems related to industrial water treatment and utilization across many industries, including mining and mineral processing, through the application of novel chemistries as well as intelligent automation and digital solutions. He holds a Bachelor of Science degree in Chemistry from the University of Notre Dame, as well as a Master of Science and a PhD in Physical Chemistry from the University of Chicago.
While power dominates the headlines in AI infrastructure, water is the silent arbiter of project viability. Investors and developers obsess over megawatts and grid capacity, but the reality is that cooling systems are tethered to a resource that is often less predictable and more politically charged. When water or wastewater capacity hits a ceiling, the fallout moves beyond engineering. It triggers permitting stalls, operational interruptions, and structural impairment of asset value.
Across the U.S., municipalities are no longer just providing service; they are becoming the ultimate ‘gatekeepers’ for high-volume users. For instance, Tucson now requires any new or expanding large water user expecting more than 7.4 million gallons per month to submit a conservation plan, undergo public review, and secure City Council approval before accessing Tucson Water.
Marana’s policy further states that Marana Water will not supply potable water to data centers for cooling and requires documentation of an alternate source. In Chandler, the city council unanimously rejected a proposal to rezone land for a 422,000-square-foot AI data center campus after public opposition emphasized water use, noise, and limited local benefit.
Sitting between engineering and financial close, these water policies represent a major ‘blind spot’ for developers. Late-stage discovery of water limitations results in stranded capital and protracted entitlement delays. For modern investors, such water risk is now a primary underwriting variable that can dictate the viability of an entire transaction.
Why Power Is Only Half the Constraint
Power determines how much IT load can be energized, but cooling determines whether that load can operate within temperature limits on peak summer days. Cooling design also determines whether the site depends on local water, meaning the true constraint is rarely singular.
Data centers typically rely on one of two primary heat rejection approaches.
Evaporative systems, such as cooling towers, remove heat through water evaporation. This requires continuous makeup water to replace evaporative loss and generates blowdown to control mineral concentration. Blowdown becomes a wastewater stream, tying the facility to sewer capacity, discharge regulations, and pretreatment requirements.
Dry systems, such as air-cooled chillers and dry coolers, reduce direct on-site water consumption but increase electrical demand as outdoor temperatures rise, particularly during summer peaks. That shift moves the constraint toward grid capacity and power pricing during the very hours when electricity is most expensive and constrained. In both configurations, the constraint does not disappear but shifts, and each approach carries a distinct exposure profile that must be evaluated at the basin and grid level.
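The water-side exposure of an evaporative design can be approximated with the standard cooling-tower balance, in which the cycles of concentration tie blowdown to evaporation. A minimal sketch with assumed values:

```python
# Cooling-tower water balance: makeup = evaporation + blowdown (drift ignored).
# Blowdown is tied to evaporation by the cycles of concentration (CoC):
#   blowdown = evaporation / (CoC - 1)
evaporation_gpm = 100   # assumed evaporative loss for the load served, gallons/min
coc = 4.0               # assumed cycles of concentration from water chemistry limits

blowdown_gpm = evaporation_gpm / (coc - 1)
makeup_gpm = evaporation_gpm + blowdown_gpm

print(f"Blowdown: {blowdown_gpm:.1f} GPM -> sewer/discharge permit exposure")
print(f"Makeup:   {makeup_gpm:.1f} GPM -> supply allocation exposure")
```

The design choice is visible in the formula: raising cycles of concentration cuts blowdown and makeup, but pushes dissolved solids higher, which is exactly where pretreatment mandates and plant acceptance thresholds come into play.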
Inside the Water Footprint of AI Data Centers
Water exposure extends beyond the visible intake line and is often more complex than initial site reviews suggest.
In tower-based systems, make-up water demand rises as ambient temperatures increase because more heat must be rejected during peak hours. Blowdown volumes also rise, increasing steady wastewater discharge. In many jurisdictions, wastewater capacity determines viability before raw water supply does. Dissolved solids and treatment chemistry can trigger pretreatment mandates or exceed plant acceptance thresholds, creating operational bottlenecks that were not modeled at the outset.
The true water footprint of an asset is often obscured by ‘siloed’ diligence. While a facility might minimize on-site usage, it remains tethered to the water intensity of the local energy mix—a dependency that creates a hidden risk during peak demand. Because most models consider water, power, and wastewater as isolated variables, the full scale of the water-energy nexus is rarely consolidated. This leaves the project exposed to systemic failure points that only become visible late in the development cycle.
Why Water Risk Is Frequently Mispriced
The assumption that water is a stable, predictable utility is a significant blind spot in traditional underwriting. Standard diligence often stops at a letter of intent from a provider, ignoring regulatory contingencies—such as recycled water mandates or peak-heat restrictions—that govern high-intensity facilities. Failing to account for these municipal requirements leads to Capex volatility and structural delays, turning a simple utility expense into a primary threat to projected returns.
At a portfolio level, aggregated corporate reporting can obscure localized exposure. Average water intensity metrics do not reveal whether specific assets sit in basins facing physical scarcity or wastewater systems operating near capacity. Valuations that assume perpetual expansion can fail at the local level when additional allocation is unavailable, undermining long-term growth assumptions embedded in underwriting models.
From Environmental Constraint to Financial Exposure
Water risk tends to accumulate over time, moving through operations, regulation, and local politics until it becomes a real constraint on performance.
For operators, the first pressure points are often summer peaks, when supply limits tighten and water quality can swing at the exact moment cooling systems are working hardest. This dilemma leads to emergency operational changes, such as pulling maintenance forward or taking short outages. Ultimately, the revenue impact of those decisions is usually disproportionate to the duration of the disruption.
For developers, on the other hand, regulatory shifts can trigger midstream redesigns. A project engineered around potable water may be required to transition to reclaimed supply, adding infrastructure, storage, and treatment complexity after capital has already been committed.
Public opposition at the local level introduces political friction that stalls approvals and compounds reputational risk. Contentious infrastructure upgrades can derail project schedules and force unfavorable cost-sharing renegotiations. Collectively, these municipal factors feed into underwriting through increased delay risk, Capex volatility, and a diminished capacity for long-term expansion.
What Needs to Change in Infrastructure Planning
Water must be evaluated at the same stage as power during site screening and early design.
A simple confirmation of water availability is no longer sufficient. Basin-level allocation rules, drought contingency plans, wastewater capacity, discharge quality requirements, and embedded grid water intensity must be assessed before engineering assumptions are finalized.
Every investment memo and design review should include a transparent water balance that identifies source type, volume requirements, discharge pathways, and regulatory triggers under peak conditions. This allows engineering and underwriting teams to evaluate exposure in parallel rather than sequentially.
Water limits are now shaping asset values in a direct, measurable way. Resilience starts with expansion plans that can hold up under tighter supply caps, and with capital that funds backup sourcing options and protection against shifting rules. Financing and insurance need to move to basin-by-basin risk models, because water availability is already the deciding factor in approvals and the constraint that most reliably dictates whether an asset can keep performing over time.
# # #
About the Author
Dr. Vian Sharif is the Founder and President of NatureAlpha, an AI-first fintech platform delivering science-based environmental risk insights across nearly $3 trillion in assets under management. With 20 years of experience at the intersection of finance, technology, and sustainability, she also serves as Head of Sustainability at FNZ Group and is a global advisor on nature-aligned investing. She holds a PhD in Environmental Behavior Change and was recognized with a 2025 Fin-Earth Award for Natural Capital and Biodiversity.
A veteran forensic consultant’s patent-pending platform is exposing the hidden scheduling failures that silently destroy value across every major infrastructure project in America.
Every year, billions of dollars in construction value are destroyed: not by bad materials, not by incompetent workers, not even by unforeseen site conditions, but by scheduling failures that nobody caught in time.
Ricardo Hinojos has spent more than two decades in the field, from underground utilities to hyperscale data centers serving some of the most demanding clients on the planet. In project after project, he kept seeing the same quiet crisis: schedules that looked clean on paper but were riddled with deficiencies invisible to the human eye — missing logic ties, resource conflicts, unrealistic durations, and cascading risks that would not surface until millions of dollars were already committed.
The industry accepted this as normal. Mr. Hinojos refused to.
The Data Tells a Brutal Story
In forensic work analyzing over $3.2 billion in construction projects, RHSS found that more than 70 percent of construction schedules contain critical deficiencies: errors significant enough to compromise project delivery, inflate costs, and expose owners to litigation. These are not minor formatting issues. These are logic gaps that cause downstream collapse. Duration assumptions that defy physics. Resource allocations that exist only on paper.
For hyperscale data center construction, where a single day of delay can cost hundreds of thousands of dollars in lost revenue, this is not merely an operational problem. It is a financial crisis in slow motion. Amazon, Google, Microsoft, and the broader hyperscale ecosystem are racing to bring capacity online against unprecedented demand, with the margin for error shrinking every quarter.
Why Traditional Scheduling Tools Are Not Enough
Primavera P6. Microsoft Project. Oracle. These are powerful platforms. But they are instruments, not intelligence. They record what you tell them. They do not question whether what you told them is right. The gap between a schedule that looks compliant and one that is defensible has always required a seasoned expert to bridge. Until now, that meant expensive consultants, weeks of review, and subjective judgment calls that did not always hold up in court or in client meetings.
The construction industry has been waiting, perhaps without realizing it, for something categorically better.
A New Paradigm: Predictive Schedule Intelligence
The platform developed by Ricardo Hinojos Scheduling Solutions represents what the firm calls predictive schedule intelligence, an AI-powered validation system purpose-built for the complexity of hyperscale infrastructure projects. The patent-pending system achieves 91 percent accuracy in identifying schedule deficiencies before they become field problems. It does not merely flag errors; it predicts cascading impacts, generates litigation-grade documentation, and produces defensible forensic analysis at a fraction of the time and cost of traditional methods.
“This is not a bolt-on feature for an existing platform,” Mr. Hinojos said. “It is a ground-up rethinking of how construction intelligence should work. The industry has accepted preventable failure for too long.”
What RHSS Delivers
Automated Schedule Validation: Quality checks against DCMA 14-Point analysis, contract specifications, and industry standards, completed in hours rather than weeks.
AI-Driven Resource Loading: Manpower forecasting and crew productivity analysis tied to real-world RS Means labor data across all construction disciplines.
Forensic Delay Analysis: Court-ready documentation and defensible delay analysis built to withstand litigation, arbitration, and regulatory scrutiny.
Earned Value Integration: Real-time project health visibility through EVM metrics calibrated for hyperscale data center construction workflows.
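For a flavor of what automated validation involves, the sketch below implements one DCMA 14-Point style check, flagging activities with missing logic ties. It illustrates the general technique only; it is not RHSS’s patent-pending system:

```python
# One DCMA 14-Point style check: flag activities missing predecessors or
# successors. DCMA guidance treats >5% missing logic as a red flag.
from dataclasses import dataclass, field

@dataclass
class Activity:
    act_id: str
    predecessors: list = field(default_factory=list)
    successors: list = field(default_factory=list)

def missing_logic(activities):
    # Start/finish milestones legitimately lack one side of logic; a real
    # tool would exempt them. This sketch checks every activity.
    flagged = [a.act_id for a in activities
               if not a.predecessors or not a.successors]
    pct = 100 * len(flagged) / len(activities)
    return flagged, pct

acts = [Activity("A100", [], ["A200"]), Activity("A200", ["A100"], [])]
flagged, pct = missing_logic(acts)
print(f"{pct:.0f}% of activities missing logic: {flagged}")
```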
The Bigger Picture
We are entering an era where the organizations that build the fastest, most reliably, and most cost-effectively will not simply be the ones with the best labor or the best materials. They will be the ones with the best intelligence systems. RHSS was built precisely for this moment, and for the clients, partners, and technology companies that recognize what is at stake in the race to deliver the infrastructure that powers the modern economy.
# # #
About the Author
Ricardo Hinojos is a Certified Forensic Construction Consultant (CFCC) with 20+ years in construction project management and forensic consulting. He specializes in hyperscale data center construction scheduling, forensic delay analysis, and AI-powered project intelligence. He holds a patent-pending AI schedule validation system achieving 91% accuracy across $3.2 billion in analyzed projects and serves as an expert witness in construction delay litigation and arbitration.
AI is increasingly steering the data center industry toward new operational practices, where automation, analytics and adaptive control are paving the way for “dark” — or lights-out, unstaffed — facilities. Cooling systems, in particular, are leading this shift. Yet despite AI’s positive track record in facility operations, one persistent challenge remains: trust.
In some ways, AI faces a similar challenge to that of commercial aviation several decades ago. Even after airlines had significantly improved reliability and safety performance, making air travel not only faster but also safer than other forms of transportation, it still took time for public perceptions to shift.
That same tension between capability and confidence lies at the heart of the next evolution in data center cooling controls. As AI models — of which there are several — improve in performance, becoming better understood, transparent and explainable, the question is no longer whether AI can manage operations autonomously, but whether the industry is ready to trust it enough to turn off the lights.
AI’s place in cooling controls
Thermal management systems, such as CRAHs, CRACs and airflow management, represent the front line of AI deployment in cooling optimization. Their modular nature enables the incremental adoption of AI controls, providing immediate visibility and measurable efficiency gains in day-to-day operations.
AI can now be applied across four core cooling functions:
Dynamic setpoint management. Continuously recalibrates temperature, humidity and fan speeds to match load conditions.
Thermal load forecasting. Predicts shifts in demand and makes adjustments in advance to prevent overcooling or instability.
Airflow distribution and containment. Uses machine learning to balance hot and cold aisles and stage CRAH/CRAC operations efficiently.
Fault detection, predictive and prescriptive diagnostics. Identifies coil fouling, fan oscillation, or valve hunting before they degrade performance.
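As a minimal illustration of the first function above, the sketch below trims fan speed against a supply-air setpoint with simple proportional logic; production systems use learned models and many more inputs, and every constant here is an assumption:

```python
# Minimal proportional control of CRAH fan speed against a supply-air
# temperature setpoint -- the crudest form of dynamic setpoint management.
SETPOINT_C = 24.0     # assumed supply-air target
GAIN = 5.0            # assumed % fan speed per degree C of error
MIN_SPEED, MAX_SPEED = 30.0, 100.0  # assumed safe fan-speed envelope, %

def fan_speed(measured_c: float, current_speed: float) -> float:
    error = measured_c - SETPOINT_C            # positive -> too warm
    new_speed = current_speed + GAIN * error   # speed up when warm, slow when cool
    return max(MIN_SPEED, min(MAX_SPEED, new_speed))

print(fan_speed(25.2, 60.0))  # 66.0 -> ramps up on a warm reading
print(fan_speed(23.4, 60.0))  # 57.0 -> eases off and saves fan energy
```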
A growing ecosystem of vendors is advancing AI-driven cooling optimization across both air- and water-side applications. Companies such as Vigilent, Siemens, Schneider Electric, Phaidra and Etalytics offer machine learning platforms that integrate with existing building management systems (BMS) or data center infrastructure management (DCIM) systems to enhance thermal management and efficiency.
Siemens’ White Space Cooling Optimization (WSCO) platform applies AI to match CRAH operation with IT load and thermal conditions, while Schneider Electric, through its Motivair acquisition, has expanded into liquid cooling and AI-ready thermal systems for high-density environments. In parallel, hyperscale operators, such as Google and Microsoft, have built proprietary AI engines to fine-tune chiller and CRAH performance in real time. These solutions range from supervisory logic to adaptive, closed-loop control. However, all share a common aim: improve efficiency without compromising compliance with service level agreements (SLAs) or operator oversight.
The scope of AI adoption
While IT cooling optimization has become the most visible frontier, conversations with AI control vendors reveal that most mature deployments still begin at the facility water loop rather than in the computer room. Vendors often start with the mechanical plant and facility water system because these areas present fewer variables, such as temperature differentials, flow rates and pressure setpoints, and can be treated as closed, well-bounded systems.
This makes the water loop a safer proving ground for training and validating algorithms before extending them to computer room air cooling systems, where thermal dynamics are more complex and influenced by containment design, workload variability and external conditions.
Predictive versus prescriptive: the maturity divide
AI in cooling is evolving along a maturity spectrum — from predictive insight to prescriptive guidance and, increasingly, to autonomous control. Table 1 summarizes the functional and operational distinctions among these three stages of AI maturity in data center cooling.
Table 1. Predictive, prescriptive, and autonomous AI in data center cooling
Most deployments today stop at the predictive stage, where AI enhances situational awareness but leaves action to the operator. Achieving full prescriptive control will require not only a deeper technical sophistication but also a shift in mindset.
Technically, it is more difficult to engineer because the system must not only forecast outcomes but also choose and execute safe corrective actions within operational limits. Operationally, it is harder to trust because it challenges long-held norms about accountability and human oversight.
The divide, therefore, is not only technical but also cultural. The shift from informed supervision to algorithmic control is redefining the boundary between automation and authority.
AI’s value and its risks
No matter how advanced the technology becomes, cooling exists for one reason: maintaining environmental stability and meeting SLAs. AI-enhanced monitoring and control systems support operating staff by:
Predicting and preventing temperature excursions before they affect uptime.
Detecting system degradation early and enabling timely corrective action.
Optimizing energy performance under varying load profiles without violating SLA thresholds.
Yet efficiency gains mean little without confidence in system reliability. It is also important to clarify that AI in data center cooling is not a single technology. Control-oriented machine learning models, such as those used to optimize CRAHs, CRACs and chiller plants, operate within physical limits and rely on deterministic sensor data. These differ fundamentally from language-based AI models such as GPT, where “hallucinations” refer to fabricated or contextually inaccurate responses.
At the Uptime Network Americas Fall Conference 2025, several operators raised concerns about AI hallucinations — instances where optimization models generate inaccurate or confusing recommendations from event logs. In control systems, such errors often arise from model drift, sensor faults, or incomplete training data, not from the reasoning failures seen in language-based AI. When a model’s understanding of system behavior falls out of sync with reality, it can misinterpret anomalies as trends, eroding operator confidence faster than it delivers efficiency gains.
The discomfort is not purely technical; it is also human. Many data center operators remain uneasy about letting AI take the controls entirely, even as they acknowledge its potential. In AI’s ascent toward autonomy, trust remains the runway still under construction.
Critically, modern AI control frameworks are being designed with built-in safety, transparency and human oversight. For example, Vigilent, a provider of AI-based optimization controls for data center cooling, reports that its optimizing control switches to “guard mode” whenever it is unable to maintain the data center environment within tolerances. Guard mode brings on additional cooling capacity (at the expense of power consumption) to restore SLA-compliant conditions. Typical triggers include rapid drift or temperature hot spots. There is also a manual override option, which enables the operator to take control, supported by monitoring and event logs.
This layered logic provides operational resiliency by enabling systems to fail safely: guard mode ensures stability, manual override guarantees operator authority, and explainability, via decision-tree logic, keeps every AI action transparent. Even in dark-mode operation, alarms and reasoning remain accessible to operators.
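A minimal sketch of that layered logic, with hypothetical thresholds (the actual products’ behavior is described only at the level quoted above):

```python
# Layered fail-safe: optimize while in tolerance, fall back to guard mode
# when out of tolerance, and always yield to a manual override.
def control_mode(temp_c: float, manual_override: bool,
                 lo: float = 18.0, hi: float = 27.0) -> str:
    # lo/hi are hypothetical SLA temperature bounds.
    if manual_override:
        return "MANUAL"        # operator authority always wins
    if temp_c < lo or temp_c > hi:
        return "GUARD"         # add cooling capacity, log the excursion
    return "OPTIMIZE"          # AI trims energy within the SLA envelope

for reading in [22.0, 28.5]:
    print(reading, control_mode(reading, manual_override=False))
```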
These frameworks directly address one of the primary fears among data center operators: losing visibility into what the system is doing.
Outlook
Gradually, the concept of a dark data center, one operated remotely with minimal on-site staff, has shifted from being an interesting theory to a desirable strategy. In recent years, many infrastructure operators have increased their use of automation and remote-management tools to enhance resiliency and operational flexibility, while also mitigating low staffing levels. Cooling systems, particularly those governed by AI-assisted control, are now central to this operational transformation.
Operational autonomy does not mean abandoning human control; it means achieving reliable operation without the need for constant supervision. Ultimately, a dark data center is not about turning off the lights; it is about turning on trust.
The Uptime Intelligence View
AI in thermal management has evolved from an experimental concept into an essential tool, improving efficiency and reliability across data centers. The next step — coordinating facility water, air and IT cooling liquid systems — will define the evolution toward greater operational autonomy. However, the transition to “dark” operation will be as much cultural as it is technical. As explainability, fail-safe modes and manual overrides build operator confidence, AI will gradually shift from being a copilot to autopilot. The technology is advancing rapidly; the question is how quickly operators will adopt it.
The state of the freight economy, rise of artificial intelligence (AI), and accelerating levels of fraud across the trucking industry topped the agenda at SMC3 JumpStart26, an annual supply chain education event held in Atlanta earlier this week.
JumpStart brings together professionals from across the less-than-truckload (LTL) industry for three days of networking, presentations, and panel discussions on the issues affecting the industry. More than 500 people turned out for the event, which was held at the Renaissance Atlanta Waverly, January 26-28.
The freight economy continues to be marked by uncertainty, despite some bright spots on the broader economic horizon, according to economist Keith Prather of Armada Corporate Intelligence, who gave an economic update on the first day of the conference. Prather cited consumer spending on services rather than goods, tariff volatility, and an unhealthy housing market as persistent drags on the freight economy. Bright spots include anticipated tax refunds that may boost consumer spending on goods later this year, a slowly improving residential construction market that could help spur freight movement, and strong GDP growth heading into 2026.
Touting AI
AI dominated much of the discussion over the three days, with LTL freight carriers, third-party logistics services (3PL) providers, and technology companies detailing how the technology can be used to improve operations within companies and across the industry. Speakers included Mark Albrecht, vice president of artificial intelligence and enterprise strategy at 3PL C.H. Robinson.
Albrecht’s talk coincided with the company’s launch of AI agents aimed at combatting missed LTL pickups. Two new AI agents are tracking down missed pickups and using advanced reasoning to determine how to keep freight moving, according to a January 26 company announcement. C.H. Robinson said it has automated 95% of checks on missed LTL pickups, saving more than 350 hours of manual work per day, helping shippers’ freight move up to a day faster, and reducing unnecessary return trips to pick up missed freight by 42%. The tools are part of a fleet of more than 30 AI agents C.H. Robinson has developed in house to streamline LTL processes.
Cracking down on fraud
The conference also featured an interview with Derek Barrs, administrator of the Federal Motor Carrier Safety Administration (FMCSA). Barrs addressed the widespread fraud affecting the trucking industry, discussing how FMCSA is working with states and other federal agencies to combat safety problems arising from several issues, including the issuing of non-domiciled commercial drivers licenses (CDLs), English-language proficiency enforcement, and entry-level driver training programs.
Barrs said FMCSA is working with states to ensure the enforcement of existing English-language proficiency regulations and that “thousands and thousands” of drivers have been placed out of service as a result. FMCSA is also working with states to review their processes for issuing non-domiciled CDLs, which may be granted to non-citizens living in the United States. Much of the problem centers around states issuing licenses to non-citizens for periods of time that exceed their legal status to work in the country. Barrs said most states have stopped issuing non-domiciled CDLs while those processes are being reviewed but said much work remains to fix breakdowns in the system.
Barrs said FMCSA is focused on rooting out bad actors in driver training as well, noting that the agency has removed 6,800 listings from its training provider registry to date and that investigations of driver training schools continue.
Show organizers cited a strong turnout for the event despite being affected by winter storm Fern, which resulted in thousands of flight cancellations nationwide and widespread power outages in the Southeast. The crowd of more than 500 attendees was down from an expected group of more than 700 registrants.
SMC3 will hold its annual Connections event this coming June in Palm Beach, Fla.
In his 40 years leading McLeod Software, one of the nation’s largest providers of transportation management systems for truckers and 3PLs (third-party logistics providers), Tom McLeod has seen many a new technology product introduced with much hype and promise, only to fade in real-world practice and fail to mature into a productive application.
In his view, as new tech players have come and gone, the basic demand from shippers and trucking operators for technology has remained straightforwardly simple and unchanged over time: “Find me a way to use computers and software to get more done in less time and [at a] lower cost,” he says.
“It’s been the same goal, from decades ago when we replaced typewriters, all the way to today finding ways to use artificial intelligence (AI) to automate more tasks, streamline processes, and make the human worker more efficient,” he adds. “Get more done in less time. Make people more productive.”
The difference between now and the pretenders of the past? McLeod and others believe that AI is the real thing and, as it continues to develop and mature, will be incorporated deeper into every transportation and logistics planning, execution, and supply chain process, fundamentally changing and forcing a reinvention of how shippers and logistics service providers operate and manage the supply chain function.
“But it is not a magic bullet you can easily switch on,” McLeod cautions. “While the capabilities look magical, at some level it takes time to train these models and get them using data properly and then come back with recommendations or actions that can be relied upon,” he adds.
THE DATA CONUNDRUM
One of the challenges is that so much supply chain data today remains highly unstructured—by one estimate, as much as 75%. Converting and consolidating myriad sources and formats of data, and ensuring it is clean, complete, and accurate remains perhaps the biggest challenge to accelerated AI adoption.
Often today when a broker is searching for a truck, entering an order, quoting a load, or pulling a status update, someone is interpreting that text or email, extracting information from the transportation management system (TMS), and creating a response to the customer, explains Doug Schrier, McLeod’s vice president of growth and special projects. “With AI, what we can do is interpret what the email is asking for, extract that, overlay the TMS information, and use AI to respond to the customer in an automated fashion,” he says.
To come up with a price quote using traditional methods might take three or four minutes, he’s observed. An AI-enabled process cuts that down to five seconds. Similarly, entering an order into a system might take four to five minutes. With AI interpreting the email string and other inputs, a response is produced in a minute or less. “So if you are doing [that task] hundreds of times a week, it makes a difference. What you want to do is get the human adding the value and [use AI] to get the mundane out of the workflow.”
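In outline, the automated flow Schrier describes looks something like the sketch below. Every helper named here is hypothetical, standing in for the AI extraction, TMS lookup, and response steps; none is an actual McLeod API:

```python
# Hypothetical outline of an AI-assisted quote workflow: interpret the
# email, overlay TMS data, and draft the reply. None of these helpers
# are real McLeod APIs; they label the steps described in the article.
def handle_quote_email(raw_email: str) -> str:
    request = extract_shipment_details(raw_email)   # AI: lane, weight, class, dates
    rate = lookup_rate(request)                     # TMS: pricing + capacity overlay
    return draft_reply(request, rate)               # AI: customer-ready response

def extract_shipment_details(raw_email: str) -> dict:
    ...  # language model parses free-form text into structured fields

def lookup_rate(request: dict) -> float:
    ...  # queries the TMS / rating engine for the lane

def draft_reply(request: dict, rate: float) -> str:
    ...  # composes the quote, held for human review or sent automatically
```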
Yet the growth of AI is happening across a technology landscape that remains fragmented, with some solutions that fit part of the problem, and others that overlap or conflict. Today it’s still a market where there is not one single tech provider that can be all things to all users.
In McLeod’s view, its job is to focus on the mission of providing a highly functional primary TMS platform—and then complement and enhance that with partners who provide a specialized piece of an ever-growing solution puzzle. “We currently have built, over the past three decades, 150 deep partnerships, which equates to about 250 integrations,” says Ahmed Ebrahim, McLeod’s vice president of strategic alliances. “Customers want us to focus on our core competencies and work with best-of-breed parties to give them better choices [and a deeper solution set] as their needs evolve,” he adds.
One example of such a best-of-breed partnership is McLeod’s arrangement with Qued, an AI-powered application developer that provides McLeod TMS clients with connectivity and process automation for every load appointment scheduling mode, whether through a portal, email, voice, or text.
Before Qued was integrated, there were about 18 steps a user had to complete to get an appointment back into the TMS, notes Tom Curee, Qued’s president. With Qued, those steps are reduced to virtually zero and require no human intervention.
As soon as a stop is entered into the TMS, it is immediately and automatically routed to Qued, which reaches out to the scheduling platform or location, secures the appointment, and returns an update into the TMS with the details. It eliminates manual appointment-making tasks like logging on and entering data into a portal, and rekeying or emailing, and it significantly enhances the value and efficiency of this particular workflow activity for McLeod users.
LEGACY SYSTEM PAIN
One of the effects of the three-year freight recession has been its impact on investment. Whereas in better times, logistics and trucking firms would focus on buying tech to reduce costs, enhance productivity, and improve customer service, the constant financial pressure has narrowed that focus.
“First and exclusively, it is now on ‘How do we create efficiency by replacing people and really bring cost levels down because rates are still extremely low and margins really tight,’” says Bart De Muynck, a former Gartner research analyst covering the visibility and supply chain tech space, and now principal at consulting firm Bart De Muynck LLC.
Most industry operators he’s spoken with have looked at AI. One example he cites as ripe for transformation is freight brokerages, “where you have rows and rows of people on the phone.” They are asking the question “Which of these processes or activities can we do with AI?”
Yet De Muynck points to one issue that is proving to be a roadblock to change and transformation. “For many of these companies, their foundational technology is still on older architectural platforms,” in some cases proprietary ones, he notes. “It’s hard to combine AI with those.” And because of years of low margins and cash flow restrictions, “they have not been able to replace their core ERP [enterprise resource planning system] or the TMS for that carrier or broker, so they are still running on very old tech.”
Those players, De Muynck says, will discover a disconcerting reality: the difficulty of trying to apply AI to a platform that is decades old. “That will yield some efficiencies, but those will be short term and limited in terms of replacing manual tasks,” he says.
The larger question, De Muynck says, is “How do you reinvent your company to become more successful? How do we create applications and processes that are based on the new architecture so there is a big [transformative] lift and shift [and so we can implement and deploy foundational pieces fairly quickly]? Then with those solutions build something with AI that is truly transformational and effective.” And, he adds, bring the workforce along successfully in the process.
“People have some things in their jobs they have to do 100 times a day,” often a menial or boring task, De Muynck adds. “AI can automate or streamline those tasks in such a way that it improves the employee’s work experience and job satisfaction, while driving efficiencies. [Rather than eliminate a position], brokers can redirect worker time to higher-value, complex tasks that need human input, intuition, and leadership.”
“With logistics, you cannot take people completely out of the equation,” he emphasizes. “[The best AI solutions] will be a human paired up with an intelligent AI agent. It will be a combination of people [and their tribal knowledge and institutional experience] and technology,” he predicts.
EYES OPEN
Shippers, truckers, and 3PLs are awakening to the possibilities of today’s technologies and what modern architecture, cloud platforms, and AI-powered agents can do, says Ann Marie Jonkman, vice president–industry advisory for software firm Blue Yonder. For many, the hardest decision is where to start. It can be overwhelming, particularly in a market shaped by chaos, uncertainty, and disruption, where getting through each week can seem a challenge in itself.
“First understand and be clear about what you want to achieve and the problems you want to solve” with a tech strategy, she advises. “Pick two or three issues and develop clear, defined use cases for each. Look at the biggest disruptions—where are the leakages occurring and how do I start?”
Among the most frequently targeted areas of investment, she sees companies putting capital and resources into automation broadly: not just physical automation with robotics, but automation of business processes, workflows, and operations. It is also about being able to understand tradeoffs, getting ahead of and removing waste, and moving the organization from a reactionary posture to one that is more proactive and informed and can leverage what Jonkman calls “decision velocity.” That places a priority not only on connecting the silos, but also on incorporating clean, accurate, and actionable data into one command center or control tower. When built and deployed correctly, such central platforms can provide near-immediate visibility into supply chain health as well as more efficient and accurate management of the end-to-end process.
Those investments in supply chain orchestration not only accelerate and improve decision-making around stock levels, fulfillment, shipping choices, and overall network and partner performance, but also provide the ability to “respond to disruption and get a handle on the data to monitor and predict disruption,” Jonkman adds. It’s tying together the nodes and flows of the supply chain so “fulfillment has the order ready at the right place and the right time [with the right service]” to reduce detention and ensure customer expectations are met.
It is important for companies not to sit on the sidelines, she advises. Get into the technology transformation game in some form. “Just start somewhere,” even if it is a small project, learn and adapt, and then go from there. “It does not need to be perfect. Perfection can be the enemy of success.”
The speed of technology innovation always has been rapid, and the advent of AI and automation is accelerating that even further, observes Jason Brenner, senior vice president of digital portfolio at FedEx. “We see that as an opportunity, rather than a challenge.”
He believes one of the industry’s biggest challenges is turning innovation into adoption, “ensuring new capabilities integrate smoothly into existing operations and deliver value quickly.” Brenner adds that in his view, “innovation is healthy and pushes everyone forward.”
Execution at scale is where the rubber meets the road. “Delivering technology that works reliably across millions of shipments, geographies, and constantly changing conditions requires deep operational integration, massive data sets, and the ability to test solutions in multiple environments,” he says. “That’s where FedEx is uniquely positioned.”
DEFYING AUTOMATION NO MORE
Before the arrival of the newest forms of AI, “there were shipping tasks that had defied automation for decades,” notes Mark Albrecht, vice president of artificial intelligence for freight broker and 3PL C.H. Robinson. “Humans had to do this repetitive, time-consuming—I might even say mind-numbing—yet essential work.”
Early forms of AI, such as machine learning tools and algorithms, provided a hint of what was to come. CHR, which has one of the largest in-house IT development groups in the industry, has been using such tools for a decade.
Large language models and generative AI were the next big leap. “It’s the advent of agentic AI that opens up new possibilities and holds the greatest potential for transformation in the coming year,” Albrecht says, adding, “Agentic AI doesn’t just analyze or generate content; it acts autonomously to achieve goals like a human would. It can apply reasoning and make decisions.”
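CHR’s agents are proprietary, but the autonomous plan-act-observe loop Albrecht describes is a common pattern in agentic systems. The toy sketch below stubs out the reasoning step in place of a real LLM call; none of the names reflect CHR’s systems.

```python
# Generic agent-loop sketch: plan, act with a tool, observe, repeat.
# The reasoning step is stubbed; a real agent would call an LLM there.
def run_agent(goal: str, tools: dict, max_steps: int = 5) -> str:
    state = {"goal": goal, "observations": []}
    for _ in range(max_steps):
        action, arg = plan_next_action(state)            # reasoning step
        if action == "finish":
            return arg
        state["observations"].append(tools[action](arg))  # act and observe
    return "gave up"

def plan_next_action(state: dict) -> tuple:
    """Stand-in for the model's decision-making."""
    if not state["observations"]:
        return "lookup_rate", "ATL->DFW"
    return "finish", f"quote ready: {state['observations'][-1]}"

tools = {"lookup_rate": lambda lane: f"${2150:,} for {lane}"}
print(run_agent("price the ATL->DFW load", tools))
```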
CHR has built and deployed more than 30 AI agents, Albrecht says. Collectively, they have performed millions of once-manual tasks—and generated significant benefits. “Take email pricing requests. We get over 10,000 of those a day, and people used to open each one, read it, get a quote from our dynamic pricing engine, and send that back to the customer,” he notes. “Now a proprietary AI agent does that—in 32 seconds.”
Another example is load tenders. “It used to take our people upwards of four hours to get to those through a long queue of emails,” he recalls. That work is now done by an AI agent that reads the email subject line, body, and attachments; collects other needed information; and “turns it into an order in our system in 90 seconds,” Albrecht says. He adds that if the email is for 20 orders, “the agent can handle them simultaneously in the same 90 seconds,” whereas a human would have to handle them sequentially.
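That simultaneity is simply concurrency at work. The sketch below, with scaled-down timings and invented fields, shows why 20 concurrent order-creation tasks finish in roughly the time of one.

```python
# Why 20 orders take about as long as one for an agent: the tasks run
# concurrently. Timings and fields are illustrative assumptions.
import asyncio

async def create_order(tender: dict) -> str:
    """Stand-in for the agent's read-extract-enter pipeline."""
    await asyncio.sleep(0.09)  # scaled-down stand-in for the 90-second task
    return f"order created for {tender['po']}"

async def process_tender_email(tenders: list) -> list:
    # All orders run concurrently, so 20 complete in roughly the time of one.
    return await asyncio.gather(*(create_order(t) for t in tenders))

tenders = [{"po": f"PO-{i:04d}"} for i in range(20)]
results = asyncio.run(process_tender_email(tenders))
print(len(results), "orders created")
```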
Time is money for the shipper at every step of the logistics process, so the faster a rate quote is provided, an order created, a carrier selected, and a load appointment scheduled, the greater the benefit to the shipper. “It’s all about speed to market, which, whether a retailer or manufacturer, often translates into [whether] you make the sale or keep an assembly line rolling.”
LOOKING AHEAD
Strip away all the hype, and the one tech deliverable that remains table stakes for all logistics providers and their customers is a platform that provides a timely and accurate view into where goods are and with whom, and when they will get to their destination. “First and foremost is real-time visibility that enables customer access to the movement of their product across the supply chain,” says Penske Executive Vice President Mike Medeiros. “Then, getting further upstream and allowing them to be more agile and responsive to disruptions.”
As for AI, “it’s not about replacing [workers]; it’s about pointing them in the right direction and helping [them] get more done in the same amount of time, with a higher level of service and enabling a more satisfying work experience. It’s human capital complemented by AI-powered agents as virtual assistants. We’ve already [started] down that path.”
At C.H. Robinson, the newest AI agents are tracking down missed pickups and using advanced reasoning to determine how to keep freight moving. Those agents are also collecting and analyzing previously unavailable data that LTL carriers are now using to improve their technology, scheduling, and operations.
C.H. Robinson says it launched the initiative because, with one truck carrying freight from up to 20 different shippers, LTL shipping requires complex coordination to pick it all up, take it to a terminal, and recombine it on other trucks with other freight heading in the same direction. That complexity means missed pickups and costly delays can ripple through LTL networks.
According to the company, the results are already in: 95% of checks on missed LTL pickups have been automated, saving over 350 hours of manual work per day. And unnecessary return trips to pick up missed freight have been reduced by 42%.
“A missed pickup isn’t just a minor inconvenience,” Greg West, Vice President for LTL, said in a release. “When a truck arrives and the freight or packaging isn’t ready, or the carrier couldn’t make it because they got stuck in traffic, it forces another truck to come back the next day. That might not even be our shipper’s freight, but it creates a domino effect for other freight that was supposed to get picked up and for all the other trucks down the line.”
The new agents join a fleet of more than 30 other AI agents that C.H. Robinson has already built for LTL. They include units that handle LTL price quotes, orders, freight classification, shipment tracking, and proof of delivery.
“We don’t just throw AI at anything and everything. It’s not a hobby for us. We use AI agents only where they can deliver tangible business results,” Albrecht said. “Our Lean AI processes helped us uncover the extent of time wasted in handling missed pickups and where artificial intelligence had the most potential to augment our automation software.”
A new development on the Jersey Shore is signaling a shift in how and where AI infrastructure will grow. A subsea cable landing station has announced plans for a data hall built specifically for AI, complete with liquid-cooled GPU clusters and an advertised PUE of 1.25. That number reflects a well-designed facility, but it highlights an emerging reality. PUE only tells us how much power reaches the IT load. It tells us nothing about how much work that power actually produces.
As more “AI-ready” landing stations come online, the industry is beginning to move beyond energy efficiency alone and toward compute productivity. The question is no longer just how much power a facility uses, but how much useful compute it generates per megawatt. That is the core of Power Compute Effectiveness, PCE. When high-density AI hardware is placed at the exact point where global traffic enters a continent, PCE becomes far more relevant than PUE.
To understand why this matters, it helps to look at the role subsea landing stations play. These are the locations where the massive internet cables from overseas come ashore. They carry banking records, streaming platforms, enterprise applications, gaming traffic, and government communications. Most people never notice them, yet they are the physical beginning of the global internet.
For years, large data centers moved inland, following cheaper land and more available power. But as AI shifts from training to real-time inference, location again influences performance. Some AI workloads benefit from sitting directly on the network path instead of hundreds of miles away. This is why placing AI hardware at a cable landing station is suddenly becoming not just possible, but strategic.
A familiar example is Netflix. When millions of viewers press Play, the platform makes moment-to-moment decisions about resolution, bitrate, and content delivery paths. These decisions happen faster and more accurately when the intelligence sits closer to the traffic itself. Moving that logic to the cable landing reduces distance, delays, and potential bottlenecks. The result is a smoother user experience.
Governments have their own motivations. Many countries regulate which types of data can leave their borders. This concept, often called sovereignty, simply means that certain information must stay within the nation’s control. Placing AI infrastructure at the point where international traffic enters the country gives agencies the ability to analyze, enforce, and protect sensitive data without letting it cross a boundary.
This trend also exposes a challenge. High-density AI hardware produces far more heat than traditional servers. Most legacy facilities, especially multi-tenant carrier hotels in large cities, were never built to support liquid cooling, reinforced floors, or the weight of modern GPU racks. Purpose-built coastal sites are beginning to fill this gap.
And here is the real eye-opener. Two facilities can each draw 10 megawatts, yet one may produce twice the compute of the other. PUE will give both of them the same high efficiency score because it cannot see the difference in output. Their actual productivity, and even their revenue potential, could be worlds apart.
PCE and ROIP, Return on Invested Power, expose that difference immediately. PCE reveals how much compute is produced per watt, and ROIP shows the financial return on that power. These metrics are quickly becoming essential in AI-era build strategies, and investors and boards are beginning to incorporate them into their decision frameworks.
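Neither metric has a standardized public formula yet, so the following is only an illustrative sketch: assume PCE is measured as delivered compute per megawatt and ROIP as annual revenue per megawatt, with made-up numbers for the two 10 megawatt facilities described above.

```python
# Hedged illustration only: PCE and ROIP have no published standard
# formulas, so these definitions and figures are assumptions.
def pce(petaflops_delivered: float, facility_mw: float) -> float:
    """Useful compute produced per megawatt of facility power."""
    return petaflops_delivered / facility_mw

def roip(annual_revenue_usd: float, facility_mw: float) -> float:
    """Financial return on each megawatt of invested power."""
    return annual_revenue_usd / facility_mw

# Two hypothetical 10 MW facilities with identical PUE but different output.
a = {"mw": 10, "pflops": 400, "revenue": 90e6}  # dense, liquid-cooled GPUs
b = {"mw": 10, "pflops": 200, "revenue": 45e6}  # older air-cooled hardware

for name, f in (("Facility A", a), ("Facility B", b)):
    print(f"{name}: PCE = {pce(f['pflops'], f['mw']):.0f} PFLOPS/MW, "
          f"ROIP = ${roip(f['revenue'], f['mw']) / 1e6:.1f}M per MW per year")
```

Same 10 megawatt draw, potentially the same PUE, yet output and return per megawatt differ by a factor of two, which is exactly the gap PUE cannot see.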
What is happening at these coastal sites is the early sign of a new class of data center. High density. Advanced cooling. Strategic placement at global entry points for digital traffic. Smaller footprints but far higher productivity per square foot.
The industry will increasingly judge facilities not by how much power they receive, but by how effectively they turn that power into intelligence. That shift is already underway, and the emergence of AI-ready landing stations is the clearest signal yet that compute productivity will guide the next generation of infrastructure.
# # #
About the Author
Paul Quigley is the former President and current Chief Strategic Partnership Officer of Airsys Cooling Technologies, and a global advocate for high density, energy efficient data center design. With more than three decades in HVAC and mission critical cooling, he focuses on practical solutions that connect energy stewardship with real world compute performance. Paul writes and speaks internationally about PCE, ROIP, and the future of data center health in the age of AI.