IBM turns server sideways for Web 2.0 build-out

IBM has designed a new type of rack-mount server specifically for companies running heavily trafficked "Web 2.0" sites such as Facebook and MySpace, the company announced this week.

Called the iDataPlex, the server is designed to compete with the unbranded "white-box" PCs that online companies link together by the thousand to run busy Web sites. IBM said its new server, which runs Linux and is based on Intel's quad-core Xeon processors, consumes 40 percent less power and packs more computing punch than a typical rack-mount system. The energy savings come largely from a new design that requires less power for cooling, IBM said.

Rack servers are the slender machines, shaped like oblong pizza boxes, that are stacked on top of each other in racks. The servers come in standard heights -- 1U or 2U -- but their depth, or how far back they reach into the rack, has been expanding as vendors try to cram more hardware components inside.

That has created a problem, according to IBM. Cooling systems blow air over the servers from front to back, and as the servers become deeper it takes more energy to power the fans that cool them. "The power used by the fan is proportional to the cube of the fan speed, so if you want to double the fan speed you have to use eight times the power," said Gregg McKnight, CTO of IBM's modular systems group.
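
McKnight's cube rule is one of the standard fan affinity laws, and the arithmetic is easy to check. A minimal Python sketch (the 100W baseline is an arbitrary illustrative figure, not an IBM specification):

```python
# Fan affinity law: power scales with the cube of fan speed.
# The 100 W baseline is an arbitrary illustrative figure, not an IBM spec.
BASE_POWER_W = 100.0

def fan_power(speed_ratio: float) -> float:
    """Fan power (watts) at a given multiple of the baseline speed."""
    return BASE_POWER_W * speed_ratio ** 3

print(fan_power(2.0))  # 800.0 -> doubling fan speed costs eight times the power
```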

IBM's answer is to rotate the server 90 degrees, producing a machine that is wider than usual but only 15 inches deep, compared with about 25 inches for a typical rack server. "That allowed us to run fans at a much lower velocity, and therefore save about 67 percent on the fan energy alone," he said.
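
Run in reverse, the same cube law suggests why a shallower chassis pays off so quickly: a 67 percent fan-energy saving corresponds to running the fans at roughly 69 percent of full speed. A rough back-calculation, not an IBM figure:

```python
# Back-calculate the fan speed that yields a 67 percent energy saving,
# assuming the cubic fan affinity law. Illustrative only, not an IBM figure.
target_saving = 0.67
speed_ratio = (1 - target_saving) ** (1 / 3)
print(f"fans can run at {speed_ratio:.0%} of full speed")  # ~69%
```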

IBM also pushed two racks together, creating a single wide rack that holds 84 iDataPlex servers. That allowed it to supply all of the servers from three power whips, where two separate racks would normally need four. Power whips, the moveable outlets attached to power cables, cost US$1,500 to $2,000 per month to maintain, McKnight said.
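
If McKnight's quoted maintenance cost applies per whip, the arithmetic on a double-wide rack is straightforward -- one fewer whip saves US$18,000 to $24,000 a year. A quick back-of-the-envelope sketch:

```python
# Annual saving from needing one fewer power whip per double-wide rack,
# assuming McKnight's quoted maintenance cost applies per whip.
whips_saved = 4 - 3
low, high = 1500, 2000  # US$ per whip per month
print(f"US${low * whips_saved * 12:,} to ${high * whips_saved * 12:,} per year")
# -> US$18,000 to $24,000 per year per double-wide rack
```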

The broad surface area at the rear also allowed IBM to design an optional water-cooled rear-door heat exchanger which, the company said, extracts all of the heat from the system so that it doesn't contribute to warming the datacenter.

The trade-off for sharing power cables is a less fault-tolerant system, but the software used to run busy Web sites is usually designed to fail over quickly to another server. "We interviewed Web 2.0 companies and they told us unanimously that they are designing their applications to tolerate server failures. So because it's more economical and more energy efficient, it's an attractive trade-off for them," McKnight said.
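
In code terms, that tolerance usually amounts to something like the pattern below: a hypothetical client that simply tries the next replica when a server dies, rather than depending on redundant hardware. The hostnames are placeholders:

```python
# A minimal sketch of the application-level failover the Web 2.0 companies
# describe: try each replica in turn rather than relying on redundant
# hardware. The hostnames are hypothetical placeholders.
import urllib.request

REPLICAS = ["http://app1.example.com", "http://app2.example.com"]

def fetch(path: str) -> bytes:
    last_error = None
    for host in REPLICAS:
        try:
            with urllib.request.urlopen(host + path, timeout=2) as resp:
                return resp.read()
        except OSError as err:  # server down: move on to the next replica
            last_error = err
    raise RuntimeError("all replicas failed") from last_error
```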
