Cutting-edge AI and HPC clusters require not only the highest-performing versions of the latest CPU and GPU offerings from Intel, AMD and NVIDIA, but also dense packaging of those components. This translates into substantially higher wattage densities at both the node and the rack level. Data centers around the globe are finding that liquid cooling is becoming a necessity, as air is no longer a reliable solution. Asetek's data center technologies deliver advances in performance and increased densities.
Air heat sinks struggle to remove enough heat to maintain maximum throughput, and reducing rack densities to compensate results in greater latency and lower overall throughput.
Liquid enables efficient cooling of high-wattage CPUs, GPUs and memory without sacrificing density for performance. Liquid cooling also improves reliability and future-proofs racks for even higher-kW servers and blades.
Asetek liquid coolers are a drop-in replacement for air heat sinks within server nodes. Designed to fit within a 1U chassis, they allow for maximum densities at both the node level and the rack level, minimizing the interconnect distance between servers within and between racks.
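To illustrate the density argument in rough numbers, here is a minimal sketch comparing rack power for a fully populated rack versus one de-populated to stay within an air-cooling limit. All wattage and node-count figures are hypothetical assumptions for illustration, not Asetek or vendor specifications:

```python
# Illustrative rack power-density calculation. The per-node wattage and
# rack sizes below are hypothetical assumptions, not vendor figures.

def rack_power_kw(nodes_per_rack: int, watts_per_node: float) -> float:
    """Total rack power in kW for a given node count and per-node draw."""
    return nodes_per_rack * watts_per_node / 1000.0

# Assume a 42U rack of 1U servers, each drawing ~1 kW (high-TDP CPUs
# plus accelerators). Air cooling often forces operators to de-populate
# racks; liquid cooling lets them stay fully populated.
full_rack = rack_power_kw(nodes_per_rack=42, watts_per_node=1000.0)
half_rack = rack_power_kw(nodes_per_rack=21, watts_per_node=1000.0)

print(f"Fully populated (liquid-cooled): {full_rack:.1f} kW")  # 42.0 kW
print(f"De-populated (air-limited):      {half_rack:.1f} kW")  # 21.0 kW
```

The point of the sketch is simply that halving node count to satisfy an airflow limit halves the compute per rack, which is the density penalty direct-to-chip cooling is meant to avoid.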
Our solutions are based on low-pressure, redundant pumps and a sealed liquid path within each server node.
Our Direct-to-Chip (D2C) liquid cooling provides a distributed pumping architecture that addresses the full range of heat rejection scenarios in air- and liquid-cooled data centers. We offer a highly flexible, reliable and proven liquid cooling platform to efficiently cool the latest server designs and help users manage the transition from an air-cooled to a liquid-cooled environment.
D2C liquid cooling dramatically increases data center density and enables maximum sustained CPU throughput.
Our D2C technology is cooling some of the fastest supercomputers in the world, including high-profile GPU-enabled AI and HPC installations.
Density and interconnect optimization are key to obtaining the computational power needed for AI and HPC. With applications emerging to solve a wide range of challenges, these systems can have a diverse set of thermal requirements.
AI & HPC
Artificial intelligence (AI) is transforming HPC. Given the critical research these systems are tasked to perform, these high-density clusters are expected to run at 100% utilization for sustained periods, making cooling performance even more crucial.
As more HPC-style server configurations move into traditional air-cooled data centers, operators must grapple not only with the increased power densities but also with the thermal challenges they present.