
Crypto mines are turning into AI factories

The pursuit of training ever-larger generative AI models has necessitated the creation of a new class of specialized data centers: facilities that have more in common with high-performance computing (HPC) environments than traditional enterprise IT.

These data centers support very high rack densities (130 kW and above with current Nvidia rack-scale systems), direct-to-chip liquid cooling, and supersized power distribution components. This equipment is deployed at scale, in facilities that consume tens of megawatts. Delivering such dense infrastructure at this scale is not just technically complicated; it often requires doing things that have never been attempted before.

Some of these ultra-dense AI training data centers are being built by well-established cloud providers and their partners, the wholesale colocation companies. However, the new class of facility has also attracted a different kind of data center developer: former cryptocurrency miners. Many of the organizations now involved in AI infrastructure, such as Applied Digital, Core Scientific, CoreWeave, Crusoe and IREN, originated as crypto mining ventures.

Some have transformed into neoclouds, leasing GPUs at competitive prices. Others operate as wholesale colocation providers, building specialized facilities for hyperscalers, neoclouds, or large AI model developers like OpenAI or Anthropic. Few of them operated traditional data centers before 2020. These operators represent a significant and recent addition to the data center industry, especially in the US.

A league of their own

Crypto mining facilities differ considerably from traditional data centers. Their primary objective is to house basic servers equipped with either GPUs or ASICs (application-specific integrated circuits), running at near 100% utilization around the clock to process calculations that yield cryptocurrency tokens. The penalties for outages are direct (fewer tokens mean lower profits), but the hardware is generally considered disposable. The business case is driven almost entirely by the cost of power, which accounts for almost all of the operating expenditure.

Many crypto mines do not use traditional server racks. Most lack redundancy in power distribution and cooling equipment, and they have no means of continuing operations in the event of a grid outage: no UPS, no batteries, no generators, no fuel. In some cases, mining equipment is located outdoors, shielded from the rain, but little else.

While crypto miners didn't build traditional data center facilities, they did have two crucial assets: land zoned for industrial use and access to abundant, low-cost power.

Around 2020, some of the largest crypto mining operators began pivoting toward hosting hardware for AI workloads, a shift that became more pronounced following the launch of ChatGPT in late 2022. Table 1 shows how quickly some of these companies have scaled their AI/HPC operations.

Table 1 The transformation of crypto miners

To develop data center designs that can accommodate the extreme power and cooling requirements of cutting-edge AI hardware, these companies are turning to engineers and consultants with experience in hyperscale projects. The same applies to construction companies. The resulting facilities are built to industry standards and are concurrently maintainable.

There are three primary reasons why crypto miners were successful in capitalizing on the demand for high-density AI infrastructure:

  • These organizations were accustomed to moving quickly, having been born in an industry that had to respond to volatile cryptocurrency pricing, shifting regulations and fast-evolving mining hardware.
  • Many were already familiar with GPUs through their use in crypto mining, and some had begun renting them out for research or rendering workloads.
  • Their site selection was primarily driven by power availability and cost, rather than proximity to customers or network hubs.

Violence of action

Applied Digital, a publicly traded crypto mining operator based in North Dakota, presents an interesting case study. The state is one of the least developed data center markets in the US, with only a few dozen facilities in total.

Applied Digital's campus in Ellendale was established to capitalize on cheap renewable power flowing between local wind farms and Chicago. In 2024, the company removed all mentions of cryptocurrency from its website, despite retaining sizable (100 MW-plus) mining operations. It then announced plans to build a 250 MW AI campus in Ellendale, codenamed Polaris Forge, to be leased by CoreWeave.

The operator expects the first 100 MW data center to be ready for service in late 2025. The facility will use direct liquid cooling and is designed to support 300 kW-plus rack densities. It is built to be concurrently maintainable, powered by two utility feeds, and will feature N+2 redundancy on most mechanical equipment. To ensure cooling delivery in the event of a power outage, the facility will be equipped with 360,000 gallons (1.36 million liters) of chilled water thermal storage. This will be Applied Digital's first non-crypto facility.
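
As a rough sense of what that thermal storage buys, the sketch below estimates ride-through time from the published tank volume. The 10 K usable temperature rise and the assumption that the full 100 MW heat load is carried by the stored water are illustrative figures, not numbers published by Applied Digital.

```python
# Back-of-envelope ride-through estimate for the chilled water thermal storage.
# Assumed (not from the source): the tank serves the full 100 MW IT heat load
# and the usable temperature rise across the stored water is 10 K.

STORAGE_LITERS = 1.36e6          # ~360,000 gallons, as announced
WATER_DENSITY_KG_PER_L = 1.0
SPECIFIC_HEAT_J_PER_KG_K = 4186  # specific heat of water
DELTA_T_K = 10                   # assumed usable temperature rise
IT_HEAT_LOAD_W = 100e6           # assumed worst-case heat load (100 MW)

stored_energy_j = (STORAGE_LITERS * WATER_DENSITY_KG_PER_L
                   * SPECIFIC_HEAT_J_PER_KG_K * DELTA_T_K)
ride_through_s = stored_energy_j / IT_HEAT_LOAD_W

print(f"Thermal buffer: {stored_energy_j / 3.6e9:.1f} MWh")             # ~15.8 MWh
print(f"Ride-through at full load: {ride_through_s / 60:.1f} minutes")  # ~9.5 min
```

Under these assumptions, the tank provides roughly ten minutes of cooling at full load: enough to keep coolant circulating to the IT equipment while utility power is restored or chillers are brought back online.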

The second building, with a capacity of 150 MW, is expected to be ready in the middle of 2026. It will deploy medium-voltage static UPS systems to improve power distribution efficiency and optimize site layout. The company has several more sites under development.

Impact on the sector

Do crypto miners have an edge in data center development? What they do have is existing access to power and a higher tolerance for technical and business risk, qualities that enable them to move faster than much of the traditional competition. This willingness to place bets matters in a market that lacks solid fundamentals: in 2025, capital expenditure on AI infrastructure is outpacing revenue from AI-based products by orders of magnitude. The future of generative AI is still uncertain.

At present, this new category of data center operators appears to be focusing exclusively on the ultra-high-density end of the market and is not competing for traditional colocation customers. For now, they don't need to either, as demand for AI training capacity alone keeps them busy. Still, their presence in the market introduces a new competitive threat to colocation providers that have opted to accommodate extreme densities in their recently built or upcoming facilities.

M&E and IT equipment suppliers have welcomed the new arrivals, not simply because they drive overall demand but because they are new buyers in a market increasingly dominated by a handful of technology behemoths. Some operators will be concerned about supply chain capacity, especially when it comes to large-scale projects: high-density campuses could deplete the stock of data center equipment such as large generators, UPS systems and transformers.

One of the challenges facing this new category of operators is the evolving nature of AI hardware. Nvidia, for example, intends to start shipping systems that consume more than 500 kW per compute rack by the end of 2027. It is not clear how many data centers being built today will be able to accommodate this level of density.


The Uptime Intelligence View

The simultaneous pivot by several businesses toward building much more complex facilities is peculiar, yet their arrival will not immediately affect most operators.

While this trend will create business opportunities for a broad swathe of design, consulting and engineering firms, it is also likely to have a negative impact on equipment supply chains, extending lead times for especially large-capacity units.

Much of this group's future success hinges on the success of generative AI in general, and the largest and most compute-hungry models in particular, as a tool for business. However, the facilities they are building are legitimate data centers that will remain valuable even if the infrastructure needs of generative AI are being overestimated.


Self-contained liquid cooling: the low-friction option

Each new generation of server silicon is pushing traditional data center air cooling closer to its operational limits. In 2025, the thermal design power (TDP) of top-bin CPUs reached 500 W, and server chip product roadmaps indicate further escalation in pursuit of higher performance. To handle these high-powered chips, more IT organizations are considering direct liquid cooling (DLC) for their servers. However, large-scale deployment of DLC with supporting facility water infrastructure can be costly and complex to operate, and is still hindered by a lack of standards (see DLC shows promise, but challenges persist).

In these circumstances, an alternative approach has emerged: air-cooled servers with internal DLC systems. Referred to by vendors as either air-assisted liquid cooling (AALC) or liquid-assisted air cooling (LAAC), these systems do not require coolant distribution units or facility water infrastructure for heat rejection. This means that they can be deployed in smaller, piecemeal installations.

Uptime Intelligence considers AALC a broad subset of DLC (defined by the use of coolant to remove heat from components within the IT chassis) that also includes multi-server options. This report discusses designs that use a coolant loop, typically water in commercially available products, that fits entirely within a single server chassis.

Such systems enable IT system engineers and operators to cool top-bin processor silicon in dense form factors, such as 1U rack-mount servers or blades, without relying on extreme-performance heat sinks or elaborate airflow designs. Given enough total air cooling capacity, self-contained AALC requires no disruptive changes to the data hall or new maintenance tasks for facility personnel.

Deploying these systems in existing space will not expand cooling capacity the way full DLC installations with supporting infrastructure can. However, selecting individual 1U or 2U servers with AALC can either reduce IT fan power consumption or enable operators to support roughly 20% greater TDP than they otherwise could, with minimal operational overhead. According to the server makers offering this type of cooling solution, such as Dell and HPE, the premium for self-contained AALC can pay for itself in as little as two years when used to improve power efficiency.
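
As a rough illustration of that payback claim, the sketch below converts a fan power saving into an annual dollar figure. Only the two-year payback and the 5% to 8% savings range (quoted later in this article) come from the source; the average server draw, PUE and electricity price are assumptions chosen for illustration.

```python
# Illustrative payback estimate for the self-contained AALC premium.
# Assumed inputs (not from the source): average server draw, PUE and
# electricity price. The 6% saving is the midpoint of the cited 5-8% range.

AVG_SERVER_POWER_W = 800        # assumed average draw of a busy 1U server
FAN_SAVINGS_FRACTION = 0.06     # midpoint of the cited 5-8% reduction
PUE = 1.4                       # assumed facility overhead multiplier
ELECTRICITY_USD_PER_KWH = 0.10  # assumed electricity price
HOURS_PER_YEAR = 8760

saved_kwh_it = AVG_SERVER_POWER_W * FAN_SAVINGS_FRACTION * HOURS_PER_YEAR / 1000
saved_usd_per_year = saved_kwh_it * PUE * ELECTRICITY_USD_PER_KWH

print(f"Annual saving per server: {saved_kwh_it * PUE:.0f} kWh, ${saved_usd_per_year:.0f}")
print(f"Premium recoverable over two years: ~${2 * saved_usd_per_year:.0f}")
```

Under these assumptions, a per-server premium in the low hundreds of dollars is consistent with the two-year figure; different utilization, tariffs or PUE values shift the result accordingly.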

Does simplicity matter?

Many of today’s commercial cold plate and immersion cooling systems originated and matured in high-performance computing facilities for research and academic institutions. However, another group has been experimenting with liquid cooling for more than a decade: video game enthusiasts. Some have equipped their PCs with self-contained AALC systems to improve CPU and GPU performance, as well as reduce fan noise. More recently, to manage the rising heat output of modern server CPUs, IT vendors have started to offer similar systems.

The engineering is simple: fluid tubing connects one or more cold plates to a radiator and pump. The pumps circulate warmed coolant from the cold plates through the radiator, while server fans draw cooling air through the chassis and across the radiator (see Figure 1). Because water is a more efficient heat transfer medium than air, it can remove heat from the processor at a greater rate, even at a lower case temperature.
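
A minimal worked comparison makes the point: moving 500 W (the top-bin CPU TDP cited earlier) at an assumed 10 K temperature rise requires only a trickle of water but a substantial volume of air.

```python
# Heat carried by a coolant stream: Q = mass_flow * specific_heat * delta_T.
# Assumed: 500 W of CPU heat and a 10 K temperature rise in both media.

CPU_HEAT_W = 500
DELTA_T_K = 10

# Water: specific heat ~4186 J/(kg*K), density ~1 kg/L
water_flow_kg_s = CPU_HEAT_W / (4186 * DELTA_T_K)
print(f"Water flow needed: {water_flow_kg_s * 60:.2f} L/min")   # ~0.72 L/min

# Air: specific heat ~1005 J/(kg*K), density ~1.2 kg/m^3
air_flow_kg_s = CPU_HEAT_W / (1005 * DELTA_T_K)
air_flow_m3_s = air_flow_kg_s / 1.2
print(f"Air flow needed: {air_flow_m3_s * 1000:.0f} L/s (~{air_flow_m3_s * 2119:.0f} CFM)")
```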

Figure 1 Closed-loop liquid cooling within the server

The coolant used in commercially shipping products is usually PG25, a mixture of 75% water and 25% propylene glycol. This formulation has been widely adopted in both DLC and facility water systems for decades, so its chemistry and material compatibility are well understood.

As with larger DLC systems, alternative cooling approaches can use a phase change to remove IT heat. Some designs use commercial two-phase dielectric coolants, and an experimental alternative uses a sealed system containing a small volume of pure water under partial vacuum. This lowers the boiling point of water, effectively turning it into a two-phase coolant.
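
The effect of that partial vacuum can be sketched with the Antoine equation, which relates the vapor pressure of water to its boiling temperature. The 100 mmHg operating pressure used below is an illustrative assumption, not a figure from any vendor.

```python
import math

# Antoine equation for water: log10(P_mmHg) = A - B / (C + T_degC).
# Constants are valid for roughly 1-100 degC. The 100 mmHg figure below is
# an assumed, illustrative operating pressure, not a vendor specification.
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(pressure_mmhg: float) -> float:
    """Boiling temperature of water (degC) at a given absolute pressure."""
    return B / (A - math.log10(pressure_mmhg)) - C

print(f"At 760 mmHg (sea level): {boiling_point_c(760):.0f} degC")       # ~100 degC
print(f"At 100 mmHg (partial vacuum): {boiling_point_c(100):.0f} degC")  # ~52 degC
```

At roughly one-eighth of atmospheric pressure, water boils near 52°C, within the temperature range a loaded processor cold plate can reach, which is what makes plain water usable as a two-phase coolant in such a sealed system.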

Self-contained AALC designs with multiple cold plates usually have redundant pumps (one on each cold plate in the same loop) and can continue operating if one pump fails. Because AALC systems for a single server chassis contain a smaller volume of coolant than larger liquid cooling systems, any leak is less likely to spill into systems below. Cold plates are typically equipped with leak detection sensors.

Closed-loop liquid cooling is best applied in 1U servers, where space constraints prevent the use of sufficiently large heat sinks. In internal testing by HPE, the pumps and fans of an AALC system in a 1U server consumed around 40% less power than the server fans in an air-cooled equivalent. This may amount to as much as a 5% to 8% reduction in total server power consumption under full load. The benefits of switching to AALC are smaller for 2U servers, which can mount larger heat sinks and use bigger, more efficient fan motors.
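
Those two figures also imply how large a share of total server power the fans of the air-cooled baseline must draw; the arithmetic below works backwards using only the numbers quoted above.

```python
# If a ~40% cut in fan (plus pump) power saves 5-8% of total server power,
# fans in the air-cooled baseline must draw roughly 12-20% of server power
# at full load. No new data; just the figures quoted above.

FAN_POWER_REDUCTION = 0.40
TOTAL_SAVINGS_RANGE = (0.05, 0.08)

for total_saving in TOTAL_SAVINGS_RANGE:
    implied_fan_share = total_saving / FAN_POWER_REDUCTION
    print(f"{total_saving:.0%} total saving -> fans ~{implied_fan_share:.0%} of server power")
```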

However, radiator size, airflow limitations and temperature-sensitive components mean that self-contained AALC is not on par with larger DLC systems, making it better suited as a transitional measure. Additionally, these systems are not currently available for GPU servers.

Advantages of AALC within the server:

  • Offers higher cooling capacity (up to 20% more) than air cooling in the same form factor and for the same energy input, with more even heat distribution and faster thermal response than heat sinks.
  • Requires no changes to white space or gray space.
  • Components are widely available.
  • Can operate without maintenance for the lifetime of the server, with low risk of failure.
  • Does not require space outside the rack, unlike “sidecars” or rear-mounted radiators.

Drawbacks of AALC within the server:

  • Closed-loop server cooling systems use several complex components that cost more than a heat sink.
  • Offers less IT cooling capacity than other liquid cooling approaches: systems available outside of high-performance computing and AI-specific deployments will typically support up to 1.2 kW of load per 1U server (see the rack-level sketch after this list).
  • Self-contained systems generally consume more energy on server fans, a parasitic component of IT energy consumption, than larger DLC systems do.
  • No control of coolant loop temperatures; control of flow rate through pumps may be available in some designs.
  • Radiator and pumps limit space savings within the server chassis.
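
To put that 1.2 kW per 1U ceiling in rack-level terms, the sketch below assumes a fully populated 42U rack of 1U servers and ignores space taken by switches and power distribution.

```python
# Rack-level implication of the ~1.2 kW per 1U ceiling.
# Assumed: a fully populated 42U rack of 1U servers, with no allowance for
# switches or power distribution units.

RACK_UNITS = 42
MAX_POWER_PER_1U_KW = 1.2

rack_it_load_kw = RACK_UNITS * MAX_POWER_PER_1U_KW
print(f"Maximum rack IT load with self-contained AALC: ~{rack_it_load_kw:.0f} kW")
```

Around 50 kW per rack is dense by enterprise standards, but well short of the 130 kW-plus rack-scale AI systems that full DLC deployments target.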

Outlook

For some organizations, AALC offers the opportunity to maximize the value of existing investments in air cooling infrastructure. For others, it may serve as a measured step on the path toward DLC adoption.

This form of cooling is likely to be especially valuable for operators of legacy facilities that have sufficient air cooling infrastructure to support some high-powered servers but would otherwise suffer from hot spots. Selecting AALC over air cooling may also reduce server fan power enough to allow operators to squeeze another server into a rack.

Much of AALC's appeal is its potential for efficient use of fan power and its compatibility with existing facility cooling capabilities. Expanding beyond this to increase a facility's cooling capacity is a different matter, requiring larger, more expensive DLC systems supported by additional heat transport and rejection equipment. In comparison, server-sized AALC systems represent a much smaller cost increase over heat sinks.

Future technical development may address some of AALC's limitations, although progress and funding will largely depend on the commercial interest in servers with self-contained AALC. In conversations with Uptime Intelligence, IT vendors have diverging views of the role of self-contained AALC in their server portfolios, suggesting that the market's direction remains uncertain. Nonetheless, there is some interesting investment in the field. For example, Belgian startup Calyos has developed passive closed-loop cooling systems that operate without pumps, instead moving coolant via capillary action. The company is working on a rack-scale prototype that could eventually see deployment in data centers.


The Uptime Intelligence View

AALC within the server may only deliver a fraction of the improvements associated with DLC, but it does so at a fraction of the cost and with minimal disruption to the facility. For many, the benefits may seem negligible. However, for a small group of air-cooled facilities, AALC can deliver either cooling capacity benefits or energy savings.

