A migration from air to direct liquid cooling is the only option that can rein in surging data centre energy costs while allowing server power densities to keep rising into the next decade. It will be too expensive not to adopt it. And it's coming sooner than you might think.
If it were up to engineers, direct liquid cooling would have been here five years ago, said Roger R. Schmidt, a 25-year IBM veteran and distinguished engineer with experience designing water-cooled mainframes. He expects distributed systems to follow in the mainframe's footsteps.
Some data centre managers may not fully grasp the problem because efficiency appears to be improving: over the past eight years, server performance has increased by a factor of 75 while performance per watt has increased only 16 times, according to HP. That means the power drawn by each server has still risen nearly fivefold - and data centres aren't using fewer processors; they're using more than ever. Meanwhile, the power density of equipment has increased to the point where power and cooling systems vendor Liebert is supporting clients with state-of-the-art server racks exceeding 30 kilowatts (kW).
That creates two problems. First, energy costs are spiralling upward. Many data centre managers don't see that today because their power use isn't metered separately and isn't part of the IT budget. As costs rise, that's likely to change, forcing IT to retrofit data centres to the new reality.
Second, all that energy gets converted to heat. If you want to know what the heat coming off a 30kW rack feels like, turn your oven on full blast and open the door. That's 3.4kW. Now imagine jamming nine ovens, all running full tilt, into the confines of a single rack in your data centre and trying to maintain the internal temperature at or below 23 degrees Celsius. Dave Kelley, manager of environmental application engineering at Liebert, said current air-cooling technologies can perhaps handle racks in the "mid-30s" of kilowatts. But equipment vendors say that 50kW racks could be a reality within five years.
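The oven analogy is easy to check with back-of-the-envelope arithmetic. The figures below come straight from the numbers above (a 30kW rack, an oven dissipating roughly 3.4kW); everything else is simple unit conversion:

```python
# Rough sanity check of the oven analogy, using the article's figures.
RACK_KW = 30.0   # power drawn (and dissipated as heat) by a dense rack
OVEN_KW = 3.4    # heat output of a domestic oven on full blast

# Every watt of electrical power the rack draws ends up as heat,
# so the rack's heat output equals its power draw.
ovens = RACK_KW / OVEN_KW
print(f"A {RACK_KW:.0f} kW rack gives off as much heat as "
      f"{ovens:.1f} ovens running full tilt.")  # about 8.8, i.e. nine ovens

# Run continuously, that rack also pushes a lot of energy per year
# through the cooling plant just to hold the room at temperature.
annual_kwh = RACK_KW * 24 * 365
print(f"Annual energy through the rack: {annual_kwh:,.0f} kWh")
```

The division works out to roughly 8.8, which is where the article's "nine ovens" comes from.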
Christian Belady, a distinguished engineer at HP, is passionate about educating data centre managers about the problem and establishing standards for liquid-cooled data centres.
"If you look at the energy costs associated with not driving toward density and taking advantage of these densities, there will be huge penalties from an efficiency standpoint," Belady said.
But all that heat will have to be removed from the data centre, which is one reason why data centre infrastructure costs per server have risen. In fact, while the cost of server hardware has remained flat or declined slightly, Belady estimated that the cost of the data centre infrastructure to support a server over a three-year life span exceeded the hardware cost back in 2003.
This year, the cost of energy (power and cooling) per server, amortised over that same three-year span, has pulled even with the equipment cost. By 2008, energy will surpass hardware, becoming the single largest component of server total cost of ownership (TCO).
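Belady's crossover can be sketched with a rough calculation. All of the figures below (server price, power draw, cooling overhead, electricity rate) are hypothetical round numbers chosen for illustration, not data from the article; only the three-year amortisation period and the observation that cooling can rival the IT load itself come from the text:

```python
# Hypothetical, illustrative figures only - not from the article.
server_price = 3000.0     # USD, assumed purchase price of a commodity server
server_watts = 500.0      # assumed average electrical draw of the server
cooling_overhead = 1.0    # assume 1 W of cooling per W of IT load
rate_usd_per_kwh = 0.12   # assumed electricity price
years = 3                 # amortisation period used in the article

# Total power including cooling, converted to kilowatts.
total_kw = server_watts * (1 + cooling_overhead) / 1000.0

# Energy cost over the amortisation period.
energy_cost = total_kw * 24 * 365 * years * rate_usd_per_kwh
print(f"{years}-year energy cost: ${energy_cost:,.0f} "
      f"vs hardware: ${server_price:,.0f}")
```

With these assumed numbers the three-year energy bill comes to about $3,150, just edging past the $3,000 hardware price - the kind of crossover Belady describes. The exact year it happens in a real data centre depends entirely on local power prices and cooling efficiency.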
That's where liquid cooling comes in. Direct cooling of servers by piping liquid refrigerant or chilled water directly to components within racks is far more efficient than using air and will become a requirement.
How soon? Liebert's Kelley said his company has projects under way with IT equipment vendors that he can't discuss. But he predicted "within a couple of years, somebody will have something where you can plug [a line containing liquid coolant] directly into a processor". More efficient designs could substantially cut cooling costs, which today can account for more than half of data centre energy use. Best practices and optimisations of existing infrastructure can bring immediate savings.
On racks approaching 30kW, users are turning to spot-cooling systems that run liquid refrigerant or chilled water to a heat exchanger that blows cool air from directly above or adjacent to server racks. That's more efficient than room air-conditioning units because the chilled air travels a shorter distance. These designs pipe liquid coolant, already used by computer room air-conditioning units at the outer edges of the data centre, up to the racks themselves. It's not hard to imagine extending those lines into the racks to deliver direct liquid cooling. The heat exchanger goes away, perhaps replaced in an IBM BladeCenter chassis with a hookup that accepts a chilled water or liquid refrigerant feed.
Today, spot-cooling systems typically require ad hoc copper piping overhead or under the floor to reach individual racks. As more and more racks require such cooling, data centre managers face a potential mess. What's worse, since few standards exist, things as basic as liquid coolant specifications and pipe couplings remain proprietary. Belady is pushing for common standards. "If we wait," he said, "everything is going to be much more proprietary, and when that happens, you lose the opportunity for interoperability".