According to IDC, the support infrastructure needed to house and run servers is second only to system price among the concerns of datacentre managers. Steve Conway, IDC's research vice-president for high-performance computing, says these issues ranked as low as 12th on the list just three or four years ago. JOHN E. WEST offers six pointers to help you drive datacentre infrastructure projects in the right direction.
1. Decide whether customers really need their own datacentre. Growing computing infrastructure is a challenging, expensive process. Even minimally robust infrastructure includes power-switching equipment and generators, and almost no one stops there. Added fault tolerance means batteries or flywheels for the UPS, reserve water supplies, redundant components, and possibly even multiple independent commercial power connections.
Before committing to the next upgrade, ask the customer whether they really need their own datacentre. Datacentre hosting could be an alternative.
2. Weigh the costs and benefits of green design. Items such as transformers, electrical wiring, cooling and UPS can have large, fixed electrical losses. The Green Grid, a consortium of information technology companies interested in improving datacentre energy efficiency, recommends right-sizing infrastructure by eliminating redundant components and installing only the equipment customers need to run their datacentre today. According to the group's Guidelines for Energy-Efficient Data Centers, right-sizing the infrastructure can cut the electricity bill by as much as 50 per cent.
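To see why right-sizing pays off, consider that fixed losses are drawn around the clock whether or not the IT load needs them. The sketch below illustrates the arithmetic with hypothetical numbers (the loads and losses are assumptions for illustration, not figures from The Green Grid's guidelines):

```python
# Illustrative sketch (hypothetical numbers): how fixed infrastructure
# losses inflate the electricity bill, and what right-sizing recovers.

def annual_energy_kwh(it_load_kw, fixed_loss_kw, hours=8760):
    """Total energy drawn in a year: IT load plus the fixed losses from
    transformers, UPS and oversized cooling that run regardless of load."""
    return (it_load_kw + fixed_loss_kw) * hours

it_load = 100.0            # kW actually consumed by servers (assumed)
oversized_losses = 80.0    # kW fixed losses in an oversized build-out (assumed)
rightsized_losses = 20.0   # kW after removing redundant components (assumed)

before = annual_energy_kwh(it_load, oversized_losses)
after = annual_energy_kwh(it_load, rightsized_losses)
saving = 1 - after / before
print(f"Energy saved by right-sizing: {saving:.0%}")
```

Even this modest example recovers a third of the bill; with heavier overprovisioning the savings approach the 50 per cent figure the guidelines cite.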
3. Closely coupled cooling. About 30 per cent of the power that goes into the datacentre is turned into heat inside servers. The traditional approach to cooling puts large chillers outside the facility to cool water, which is then pumped to computer room air-conditioning (CRAC) units on machine room floors.
The concept of "closely coupled cooling" is to put cooling near the source of the heat it is meant to remove. This allows targeted cooling and control of hot spots, and can shorten the air paths so that less fan power is needed to move cold air around the room. For example, some closely coupled designs install cooling in a rack form factor alongside server racks, or place it at the top of each rack for a "top to bottom" approach. Others deliver chilled water directly to the rear door of each rack, or interleave cooling drawers with drawers of computers inside the rack.
4. Think about the floor tiles. Plan to minimise the profile of cables and pipes customers run under the raised floor in the machine room. This is the space CRAC units use to push cold air towards the computers, and cooling energy is used far more effectively if you minimise the obstructions that air encounters. Keeping the under-floor space clear can also help eliminate datacentre hot spots.
5. Move support equipment outside. Properly siting computer infrastructure support systems can improve efficiency and make it easier to expand capacity in the future.
6. Monitor power management. How much power is being used? Are servers pulling more or less electricity than the vendor specs say they should? How close to power capacity will the next machine upgrade put a customer? Actively monitoring and managing energy usage will help you plan for future needs and show the effectiveness of steps taken to improve the datacentre's efficiency.
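The three questions above boil down to simple bookkeeping once metered power data is available. The sketch below shows the idea with hypothetical rack names and numbers (everything here is an assumption for illustration; real figures would come from metered PDUs or facility meters):

```python
# Hypothetical sketch: compare measured server power draw against vendor
# specs, then check whether a planned upgrade fits the room's power budget.
# All names and numbers are illustrative assumptions.

room_capacity_kw = 500.0   # total power the room can deliver (assumed)
vendor_spec_kw = {"rack-a": 18.0, "rack-b": 22.0}  # nameplate figures (assumed)
measured_kw = {"rack-a": 12.5, "rack-b": 19.0}     # readings from metered PDUs (assumed)

# Are servers drawing more or less than the vendor specs say they should?
for rack, spec in vendor_spec_kw.items():
    actual = measured_kw[rack]
    print(f"{rack}: drawing {actual} kW, {actual / spec:.0%} of the {spec} kW spec")

# How close to capacity will the next machine upgrade put us?
current_draw = sum(measured_kw.values())
planned_upgrade_kw = 35.0  # expected draw of the next machine (assumed)
headroom = room_capacity_kw - current_draw - planned_upgrade_kw
print(f"Headroom after upgrade: {headroom:.1f} kW")
```

Tracking these figures over time also gives a baseline against which to measure the right-sizing and cooling improvements described above.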
- JOHN E. WEST is the director of the Department of Defense High Performance Computing Center at the US Army Engineer Research and Development Center.