As many people in the worldwide HPC community head to Frankfurt am Main, Germany, for ISC High Performance 2017, it is a good time to look at a trend that will be talked about not only at ISC but also at SC17 in November.
HPC conference tracks focus on the critical applications of HPC in the sciences, finance, logistics, large-scale manufacturing, and oil and gas. New tracks are being added to address the growing penetration of HPC into decision support and big data across the segments of traditional enterprise computing.
However, in side conversations at sessions and in booths on the exhibition floor, the discussions often turn to practical concerns about providing the computing power needed. In 2017, there is notably heightened concern over the wattage trends in the CPUs and GPUs required to meet accelerating computational demands in HPC clusters. Specifically, the resulting heat and its impact on node, rack and cluster heat density have arrived like a meteor with Intel's Knights Landing, Nvidia's P100 and the latest versions of Intel's Skylake processors.
It is important to note that this is an inflection point, not simply an extension of previously seen trends. The coming wattages are so high that cooling the nodes containing the highest-performance HPC chips leaves little choice other than liquid cooling if reasonable rack densities are to be maintained. If the problem is not addressed at the node level with liquid cooling, floor-space build-outs and data center expansions are required. Because its distributed cooling architecture can address site needs across a variety of heat rejection scenarios, it can be argued that this compute-wattage inflection point is a major driver of the accelerating global adoption of Asetek liquid cooling at HPC sites and by the OEMs that serve them.
Asetek’s distributed pumping architecture is based on low-pressure, redundant pumps and closed-loop liquid cooling within each server node. This approach allows a high degree of flexibility.
Asetek ServerLSL is a server-level, liquid-enhanced air cooling solution. It can be viewed as a transitional stage in the introduction of liquid cooling, or as a tool for immediately incorporating into the data center the highest-performance computing nodes that require liquid cooling. ServerLSL lets a site leverage its existing HVAC, CRAC and CRAH units with no changes to data center cooling. ServerLSL replaces the less efficient air coolers in the servers with redundant liquid coolers (cold plates with integrated pumps) and exhausts 100% of the heat into the data center as hot air. This enables high-wattage server nodes to fit in 1U form factors and maintain high rack densities in the cluster. At the site level, all the heat is handled by existing CRACs and chillers with no changes to the infrastructure. With ServerLSL, liquid-cooled nodes can be mixed in racks with traditional air-cooled nodes.
While ServerLSL isolates the liquid loop within each server, Asetek’s RackCDU systems are rack-level solutions, enabling a much greater impact on the cooling costs of the data center overall. Yet RackCDU systems use exactly the same pumps and cold plates as ServerLSL nodes. RackCDU is the configuration used at all of the current TOP500 sites worldwide that deploy Asetek liquid cooling.
Asetek RackCDU provides the answer both at the node level and for the facility overall. As with ServerLSL, RackCDU D2C (Direct-to-Chip) uses redundant pumps and cold plates atop server CPUs and GPUs (and, optionally, other high-wattage components such as memory). These collect heat and move it via a sealed liquid path to heat exchangers in the RackCDU, which transfer it into facilities water. RackCDU D2C captures between 60% and 80% of server heat into liquid, reducing data center cooling costs by over 50% and allowing 2.5x-5x increases in data center server density.
In this hybrid liquid/air approach, the remaining heat in the data center air is removed by existing HVAC systems. When unused cooling capacity is available, data centers may choose to cool the facilities water coming from the RackCDU with existing CRACs and cooling towers.
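The hybrid split can be sketched with a back-of-the-envelope calculation using the 60-80% direct-to-chip capture range cited above. The rack wattage below is an illustrative assumption for a dense rack of high-wattage 1U nodes, not an Asetek figure:

```python
def heat_split(rack_watts, capture_fraction):
    """Return (watts carried away by facilities water, watts left for room air handlers)."""
    liquid = rack_watts * capture_fraction
    air = rack_watts - liquid
    return liquid, air

rack_watts = 40_000  # illustrative assumption, not a vendor specification
for frac in (0.60, 0.80):
    liquid, air = heat_split(rack_watts, frac)
    print(f"{frac:.0%} capture: {liquid / 1000:.1f} kW to water, {air / 1000:.1f} kW to room air")
```

Even at the low end of the capture range, the majority of the rack's heat bypasses the room air handlers entirely, which is what drives the cooling-cost and density gains described above.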
The high level of flexibility that Asetek distributed pumping provides in addressing cooling at the server, rack, cluster and site levels is lacking in approaches that rely on centralized pumping. The distributed approach delivers flexibility in heat capture, coolant distribution and heat rejection alike.
As HPC requirements in 2017 and beyond continue to affirm the need for more efficient cooling, Asetek continues to demonstrate global leadership in liquid cooling as its OEM and installation bases grow. To learn more about Asetek liquid cooling, stop by booth J-600 at ISC17 in Germany.
Appointments for in-depth discussions about Asetek’s data center liquid cooling solutions at ISC17 may be scheduled by sending an email to email@example.com.