Optimizing density and interconnect distance is key to obtaining the computational power needed for Artificial Intelligence (AI) and high-performance computing (HPC). As applications emerge to solve a widening range of challenges, these systems bring a diverse set of thermal requirements.
AI & HPC
Artificial Intelligence (AI) is transforming HPC. Given the critical research these systems are tasked with performing, these high-density clusters are expected to run at 100% utilization for sustained periods, making cooling performance even more critical.
As more HPC-style server configurations move into traditional air-cooled data centers, operators must grapple not only with the increased power densities but also with the thermal challenges those densities present.
Cutting-edge AI and HPC clusters require not only the highest-performance versions of the latest CPU and GPU offerings from Intel, AMD, and NVIDIA, but also dense packaging of those components.
This translates to substantially higher power densities at both the node and rack level, driven by the need to minimize interconnect distance between servers both within and between racks.
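To make the scale of these densities concrete, the short sketch below tallies rack-level power from per-component draws. All figures (GPU and CPU wattages, node counts) are illustrative assumptions, not vendor specifications:

```python
# Illustrative, assumed figures for a dense multi-GPU node; not vendor specs.
GPU_POWER_W = 700      # assumed per-GPU draw
CPU_POWER_W = 350      # assumed per-CPU draw
OTHER_W = 1_000        # memory, NICs, fans, drives (assumed)

# A node with 8 GPUs and 2 CPUs.
node_power_w = 8 * GPU_POWER_W + 2 * CPU_POWER_W + OTHER_W

# Packing four such nodes into one rack.
nodes_per_rack = 4
rack_power_kw = nodes_per_rack * node_power_w / 1_000

print(f"Node: {node_power_w / 1_000:.1f} kW, rack: {rack_power_kw:.1f} kW")
# Under these assumptions: ~7.3 kW per node, ~29.2 kW per rack
```

Even with these conservative assumptions, a single rack lands well beyond the 5–10 kW range that typical air-cooled data center rows were provisioned for.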
Air heat sinks struggle to dissipate this heat while maintaining maximum throughput, and spreading components across more racks to reduce density increases interconnect latency and lowers overall throughput.
Liquid cooling enables efficient cooling of high-wattage CPUs, GPUs, and memory without trading density for performance. It also improves reliability and future-proofs racks for even higher-kW servers and blades.
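The capacity advantage of liquid can be sketched with the standard heat-transfer relation Q = ṁ · c_p · ΔT. The flow rate and coolant temperature rise below are assumed example values, not figures from the text:

```python
# Sketch of liquid-loop heat-removal capacity, Q = m_dot * cp * delta_T.
# Flow rate and temperature rise are illustrative assumptions.
CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def heat_removed_kw(flow_lpm: float, delta_t_k: float,
                    density_kg_per_l: float = 1.0,
                    cp: float = CP_WATER) -> float:
    """Heat carried away by the coolant loop, in kW."""
    mass_flow_kg_s = flow_lpm * density_kg_per_l / 60.0  # L/min -> kg/s
    return mass_flow_kg_s * cp * delta_t_k / 1_000.0

# Example: 30 L/min through a rack manifold with a 15 K coolant rise.
print(f"{heat_removed_kw(30, 15):.1f} kW")  # ~31.4 kW
```

A modest water loop thus removes tens of kilowatts per rack, which is why liquid cooling preserves dense packaging where air heat sinks cannot.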