Direct Contact Liquid Cooling Adoption

HPC and hyperscale markets turn to new cooling solutions.

Sponsored Content
By Patrick McGinn, VP Product Management, CoolIT Systems Inc.

The high performance computing (HPC) and hyperscale markets epitomize the leading edge of data center design and operation. To manage increasingly complex workloads, these high performance data centers are adopting the newest generations of chip architecture, IT hardware, software and supporting infrastructure. These ecosystem attributes are driving the industry to adopt Direct Contact Liquid Cooling (DCLC™) as its next-generation cooling strategy.

Who is Using Liquid Cooling?

Hyperscale data center operators such as Microsoft, Baidu, Google, Alibaba and Facebook build some of the largest data centers in the world and, therefore, stand to gain the most from improving their total cost of ownership (TCO) models. These groups are usually in the driver’s seat when designing, building and operating their own facilities, so they are poised to adopt and take advantage of emerging technologies more quickly and aggressively than other data center groups.

The CoolIT Rack DCLC CHx80 heat exchanger (CDU).

HPC users, on the other hand, need to build out parallel processing environments that can efficiently and reliably manage computationally intensive applications. These systems rely on the highest-performing CPU, GPU, memory and interconnect technologies. As these HPC groups push the boundaries of IT hardware, the resulting server and rack densities make liquid cooling a natural fit for this segment.

Why Liquid Cooling?

Liquid has been present in data centers for many years. In fact, chilled water systems have moved deeper into the white space over time, starting with perimeter cooling and progressing to in-row coolers, overhead coolers and rear-door heat exchangers. As the liquid moves closer to the IT, the net effect has been a reduction in the energy required to distribute air, and therefore better cooling efficiency. While the HPC and hyperscale markets have benefited from these “close-coupled” air cooling systems, the rack densities being deployed today simply outstrip the air cooling capacity available to the racks.

There are several liquid cooling technologies that have gained popularity as users look for a cooling strategy that does not involve packing more air-conditioning into their data centers. Direct Contact Liquid Cooling, single-phase immersion cooling and two-phase immersion cooling are the most common forms, and all have been productized over the past five to 10 years. Each flavor of liquid cooling carries broadly the same value propositions:

  • Performance: Liquid cooling allows for significant increases in processor clock speed. This notion of hardware acceleration has existed for years, but is now being built into servers for specific industry applications (e.g., high frequency trading and electronic design automation). Enabling these performance gains means delivering more and more power to the processor. Because increased power consumption translates directly into heat that must be removed, the substantial increases in processor performance in recent years must be met by an equally aggressive strategy for heat dissipation.
  • Density: By removing the high density heat with liquid, server manufacturers can pack more high performance components into each server. Where air cooling limitations meant large, bulky heat sinks, today’s HPC and hyperscale users are seeing incredible workloads completed in small 1U and blade form factors. Finally, where legacy data center cooling meant placing only a few of the high performance servers in a rack, liquid cooling allows for a considerable increase in rack utilization, achieving more computing in the same footprint.
  • Efficiency: The heat absorption and transport characteristics of liquid are so superior to those of air that tremendous energy efficiencies can be realized by data center operators (a back-of-the-envelope comparison follows this list). This presents “low hanging fruit” opportunities to move from mechanical chilling processes to a passive cooling infrastructure that provides equivalent or superior IT cooling. Where air cooling of all types requires expensive and bulky air-conditioning and air-handling equipment, well-thought-out liquid cooling technologies require very little. Using coldplate fluid temperatures of up to 113°F (45°C), defined by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) as liquid cooling class W4, means that equipment such as dry-coolers and cooling towers can be used to cool a higher percentage of the data center, which translates into meaningful operating expenditure (OPEX) and capital expenditure (CAPEX) savings. Liquid cooling provides further energy savings by requiring fewer fans in servers.
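
As a rough illustration of the efficiency point above, the minimal sketch below compares the volumetric heat capacity of water and air. The property values are approximate textbook figures near room temperature; the comparison is illustrative only and is not drawn from any CoolIT specification.

```python
# Back-of-the-envelope comparison of water and air as heat-transfer media.
# Property values are approximate textbook figures and are illustrative only.

WATER_CP, WATER_RHO = 4186.0, 998.0   # J/(kg*K), kg/m^3
AIR_CP, AIR_RHO = 1005.0, 1.2         # J/(kg*K), kg/m^3

def volumetric_heat_capacity(cp: float, rho: float) -> float:
    """Energy absorbed per cubic metre of fluid per kelvin of temperature rise."""
    return cp * rho

water = volumetric_heat_capacity(WATER_CP, WATER_RHO)   # ~4.2 MJ/(m^3*K)
air = volumetric_heat_capacity(AIR_CP, AIR_RHO)          # ~1.2 kJ/(m^3*K)

print(f"Water: {water / 1e6:.2f} MJ/(m^3*K)")
print(f"Air:   {air / 1e3:.2f} kJ/(m^3*K)")
print(f"Water absorbs roughly {water / air:,.0f}x more heat per unit volume per degree.")
```
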
Though they carry similar value propositions, each of the different liquid cooling technologies should be considered for their fit within a particular data center environment. Common considerations include changes to user behavior, server manufacturer warranty and reliability. Given the complexity and cost of today’s data centers, a successful liquid cooling technology should fit within the constructs of the current IT and be adaptable to modern data center environments.

Liquid immersion systems are attractive, as they cool 100% of the IT within the cooling fluid. This is, unfortunately, counterbalanced in both single-phase and two-phase immersion by the major modifications required to the data center infrastructure, servers and normal operator practices. Immersion systems do not leverage the existing infrastructure; racks must be replaced with large horizontal tanks of the cooling fluid.

Direct Contact Liquid Cooling

Of the technologies listed above, Direct Contact Liquid Cooling has become the most widely adopted choice for end users, given its ability to fit common server architectures, work within current rack constructs and use existing facility cooling infrastructure, as well as its availability as a factory-installed option from server original equipment manufacturers (OEMs).

A stroll through HPC tradeshows or a search online illustrates the growing popularity of Direct Contact Liquid Cooling. DCLC is the practice of using liquid-cooled coldplates (rather than air-cooled heatsinks) that sit directly against high density power components. Given the proximity to the heat source and the efficiency of most DCLC coldplates, warm liquid as high as 113°F (45°C) can be used to cool the IT components. The energy absorbed in the fluid is more easily and cost-effectively removed from the data center, and can then be used to warm buildings, fed into any number of heat recapture strategies or rejected with passive cooling towers.

The coldplates housed within the servers are the first of three modules used in a DCLC system. The other two modules are the rack manifold and the coolant distribution unit (CDU). Rack manifolds are a simple rack attachment (think of a vertical power distribution unit at the back of a rack) that provides supply and return liquid connections to each server. Rack manifolds are, in turn, connected to the CDU, which provides the liquid pumping and interfaces with the building’s water system to reject the heat gathered inside the servers. The CDU’s job is to use the facility water supply to cool the fluid being circulated back to the coldplates. The CDU should also be capable of managing fluid pressure and flow rates, raising alarms on pre-set conditions, and communicating with the data center Building Management System. A CDU can be rack-mounted and service up to 120 servers, or row-based and service 700 or more servers.
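
To make the CDU’s role more concrete, here is a minimal sketch that applies the basic sensible-heat relationship Q = ṁ × c_p × ΔT to estimate the secondary-loop flow a CDU must circulate for a given rack load. The 80 kW rack load and 10 K loop temperature rise are assumed values chosen for illustration, not specifications of any CoolIT CDU.

```python
# Minimal sketch: estimate the coolant flow a CDU must circulate to absorb a rack's heat,
# using Q = m_dot * c_p * delta_T. The rack load and loop temperature rise below are
# illustrative assumptions, not specifications of any particular CDU.

WATER_CP = 4186.0    # J/(kg*K), approximate for water
WATER_RHO = 990.0    # kg/m^3, approximate for warm (~45 C) water

def required_flow_lpm(rack_load_w: float, delta_t_k: float) -> float:
    """Litres per minute of coolant needed to carry rack_load_w at a delta_t_k rise."""
    mass_flow_kg_s = rack_load_w / (WATER_CP * delta_t_k)
    volume_flow_m3_s = mass_flow_kg_s / WATER_RHO
    return volume_flow_m3_s * 1000.0 * 60.0

if __name__ == "__main__":
    flow = required_flow_lpm(rack_load_w=80_000.0, delta_t_k=10.0)
    print(f"An 80 kW rack with a 10 K coolant rise needs about {flow:.0f} L/min.")
```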

Key Direct Contact Liquid Cooling Considerations

While there are many features to any Direct Contact Liquid Cooling system, there are a few notable considerations of which any data center owner or operator should be aware. The first is pumping architecture. Centralized pumping is the most common approach, whereby fluid flow for the manifolds and server coldplates is provided by pumps that reside within the CDU. Integrating the pumps within the CDU consolidates all of the moving components in one place: there are no points of failure hidden within the server, and coldplates are never removed until the server is retired.

In comparison, distributed pumping architectures integrate a pump on each coldplate, which introduces multiple additional points of failure within each server. For example, a data center with 10,000 servers would have 20,000 pumps to monitor and maintain. The second major consideration is the quick disconnects used to attach the server liquid loops to the rack manifold. These connection points are critical, as a server must remain easily and safely removable from the rack for service. Professional DCLC systems use 100% dripless metal quick disconnects that can handle 5,000 connection cycles while maintaining structural integrity over an extended period in warm water environments.
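
The difference in serviceable parts between the two pumping architectures is easy to quantify. The sketch below reproduces the 10,000-server example above; the assumptions of two pumped coldplates per server, 120 servers per rack-mounted CDU and redundant dual pumps per CDU are illustrative choices made here to match the figures in the text, and will vary by deployment.

```python
# Minimal sketch: count the pumps an operator must monitor under each pumping architecture.
# Assumptions (illustrative only): two pumped coldplates per server for the distributed
# case; rack-mounted CDUs serving up to 120 servers with redundant dual pumps for the
# centralized case.

import math

def distributed_pumps(servers: int, pumps_per_server: int = 2) -> int:
    """Every server carries its own coldplate-integrated pumps."""
    return servers * pumps_per_server

def centralized_pumps(servers: int, servers_per_cdu: int = 120, pumps_per_cdu: int = 2) -> int:
    """Pumps live only in the CDUs, each serving up to servers_per_cdu servers."""
    return math.ceil(servers / servers_per_cdu) * pumps_per_cdu

servers = 10_000
print(f"Distributed pumping: {distributed_pumps(servers):,} pumps to maintain")   # 20,000
print(f"Centralized pumping: {centralized_pumps(servers):,} pumps to maintain")   # 168
```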

Dell PowerEdge C6420 Server

As processor manufacturers push the boundaries of performance and power, and data center owners strive for reduced TCO, HPC and hyperscale users will be seeing more Direct Contact Liquid Cooling in their data centers. The Dell EMC PowerEdge C6420 server represents the latest in DCLC technology and is now available to Dell EMC customers with factory-installed DCLC coldplates. These customers may select from many of CoolIT’s rack manifolds and CDUs to complete the liquid cooling solution.

Patrick McGinn leads the CoolIT product management and product marketing groups in developing off-the-shelf and custom liquid cooling solutions for HPC and hyperscale markets.
