
Asetek Liquid Cooling Now in Servers: Part 2

Finally, what every breathing body is talking about: green and energy savings in the data center. First of all, this is a very complex market and topic of discussion, and I do not believe there will be one single technology that solves the power issues. Some are looking at making the servers themselves more energy efficient, and some (like us) are trying to reduce the actual power needed for cooling the servers. Learning from our desktop cooling success, we do not believe any vendor of any technology will be successful in this space if their solution requires either: 1. a significant upfront capital investment, no matter what the TCO says, or 2. a significant change in how the data center food chain does business today. Sure, people (including data center operators) worry about the environment and Total Cost of Ownership. But based on the conversations I have had, their willingness to put up a lot of money upfront is very limited.

Secondly, habits are very tough to break, especially in a conservative segment. Convincing the food chain that their servers and racks now have to be built, installed and operated in another way is just a very tough thing to do, and even the big guys do not seem able to do it. As such, when we designed our solution we wanted to make sure that it could be deployed on any server, of any brand, and in any rack. Nothing comes for free, but we also wanted to design the solution in a way that makes it more or less cost neutral for the data center operator. Finally, we wanted to make a big impact, meaning that we wanted to offer a significant energy saving (50% or better) compared to current state-of-the-art technology.

I am happy to state that I think we achieved all of that. Rather than trying to solve world hunger in one step, we focused on where the problems are biggest. In any server with two or more processors, a rule of thumb is that under load the CPUs account for around 60% of the heat exhausted out of the server and into the room. With GPUs in the mix, this picture only gets worse. As CPU and GPU cooling experts we decided to come up with a simple solution for transporting all this CPU and GPU waste heat completely out of the building. As a result, the CRAC no longer has to cool 100% of the heat from the servers, but only 40%. And because cooling energy has a roughly linear relationship to the amount of heat removed, the energy needed for cooling the servers also drops to 40%, a 60% saving.
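To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python (the wattage is a made-up example figure; only the 60% rule of thumb comes from above):

```python
# Back-of-the-envelope illustration of the rule of thumb above: if the CPUs/GPUs
# generate ~60% of a server's heat and that heat is carried out of the building
# by liquid, the CRAC only has to handle the remaining ~40%.

server_heat_w = 400            # assumed total heat per server under load (W)
cpu_heat_fraction = 0.60       # rule-of-thumb share of heat from CPUs/GPUs

liquid_removed_w = server_heat_w * cpu_heat_fraction    # carried outside by water
crac_load_w = server_heat_w - liquid_removed_w          # what the CRAC still cools

# Cooling energy scales roughly linearly with the heat removed,
# so CRAC energy drops in the same proportion.
crac_energy_ratio = crac_load_w / server_heat_w
print(f"CRAC now cools {crac_load_w:.0f} W of {server_heat_w} W "
      f"({crac_energy_ratio:.0%} of before) -> a {1 - crac_energy_ratio:.0%} saving")
```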

We found out that because CPUs have a relatively high allowable operating temperature, in most geographies the outside ambient air is sufficient to cool water to a temperature that keeps CPUs within spec. What this means is that instead of having to spend money to purchase and operate water chillers, we only cool the water with a fan and heat exchanger outside the building. No power besides a circulation pump and the fan is used to cool the water. (Remember, the CRAC units, now running at much lower power, are keeping the rest of the server and equipment in the data center nice and cool.)
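As a rough illustration of why chillers become unnecessary, here is a minimal sketch with assumed temperatures (the approach and limit figures are hypothetical, not Asetek specifications):

```python
# Illustrative check (all numbers are assumptions): can a dry cooler alone
# keep the coolant within the CPU's allowable temperature?

ambient_c = 35.0               # assumed hot-day outdoor air temperature (°C)
dry_cooler_approach_c = 10.0   # assumed approach of the outdoor fan/heat exchanger
rack_hex_approach_c = 5.0      # assumed approach of the heat exchanger in the rack
max_coolant_inlet_c = 55.0     # assumed maximum coolant temperature at the cold plate

coolant_inlet_c = ambient_c + dry_cooler_approach_c + rack_hex_approach_c
ok = coolant_inlet_c <= max_coolant_inlet_c
print(f"Coolant inlet ≈ {coolant_inlet_c:.0f} °C "
      f"({'within' if ok else 'above'} the assumed {max_coolant_inlet_c:.0f} °C limit) "
      f"-> {'no chiller needed' if ok else 'chiller required'}")
```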

The way it works at the server level is that on each CPU (or GPU) we mount one of our integrated pump and cold plates. Instead of using an internal radiator (HEX), we route the liquid tubing to the back of the server and mount a pair of quick connect fittings in a PCI slot cover or another opening in the chassis. Turning to the rack level (see picture), we bring a pair of tubes with quick connects from the RackCDU™ (Coolant Distribution Unit) installed in the rack and attach them to the quick connectors on the server. Now we have a liquid connection between all servers and the RackCDU™ that can be taken apart at the quick connects without spilling liquid. There is no change to the design of the server (our pump/cold plates simply replace the normal air heat sinks), and the RackCDU™ is installed in a small extension that can be mounted on any rack or designed into the rack at the factory. No U-space is consumed. Reliability is very high because there are redundant pumps in each server, meaning that if a pump dies it will not take down the individual server, let alone the entire rack.
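To illustrate the redundancy argument, here is a toy calculation with a made-up failure rate (not Asetek reliability data), assuming a dual-CPU server whose two pump/cold plates sit on the same sealed loop so either pump can keep the coolant moving:

```python
# Toy redundancy illustration; the failure probability is a made-up assumption.

annual_pump_failure_prob = 0.02   # assumed chance that one pump fails within a year

# Cooling in a dual-pump server is lost only if both pumps fail (independent failures).
single_pump_loss = annual_pump_failure_prob
dual_pump_loss = annual_pump_failure_prob ** 2

print(f"One pump:        {single_pump_loss:.2%} chance of losing cooling per year")
print(f"Redundant pumps: {dual_pump_loss:.4%} chance of losing cooling per year")
```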

The RackCDU™, containing all the liquid from the servers, is connected to the outside (facilities) water through liquid-to-liquid heat exchangers, meaning that the liquid in the facilities loop and the liquid in the server loops are physically separated. The server loops and the server side of the RackCDU™ come filled and sealed from our factory, with enough fluid for their lifetime. A lot of data centers already have some sort of facility water, so the only change we impose is that water needs to be routed to each rack. That can be done under raised floors in data centers that already have them, or overhead (in conjunction with sprinkler systems), so no raised floor or the like is required.

Each server and rack will see a small incremental cost compared to pure air cooling. No free lunches. However, as the data center now needs 60% less CRAC capacity, the total investment for cooling the data center will be roughly neutral. On top of that, with pumping and fan power taken into account, the net energy used for cooling goes down by at least 50%. Our solution will be profitable and environmentally friendly from the get-go.
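As a final sanity check on that net number, here is a minimal sketch with assumed per-rack figures (both the baseline CRAC power and the pump/fan overhead are hypothetical):

```python
# Hypothetical net-savings estimate per rack; figures are assumptions, not measurements.

baseline_cooling_kw = 10.0            # assumed CRAC power for a fully air-cooled rack
crac_kw = baseline_cooling_kw * 0.40  # CRAC now handles only ~40% of the heat
pump_fan_overhead_kw = 1.0            # assumed power for circulation pumps and dry-cooler fan

liquid_cooling_kw = crac_kw + pump_fan_overhead_kw
net_saving = 1 - liquid_cooling_kw / baseline_cooling_kw
print(f"Net cooling power saving ≈ {net_saving:.0%}")   # ≈ 50% with these numbers
```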


Relevant articles

RackCDU™ Demonstration Videos
Asetek Joins the Green Grid
Benefits of Asetek’s Data Center Liquid Cooling
Liquid Cooling Delivering on the Promise