IBRS advisor Dr Kevin McIsaac says channel partners helping clients refurbish a city datacentre should pose some searching questions before any work is undertaken.
“One of the first things they should do is politely or appropriately quiz the customer about why they even own their own datacentre,” he said. “A lot of people do it because that is what they are used to and they feel comfortable with it. However, if you own a small datacentre in the city it is pretty expensive; floor space is expensive and given the massive improvements in networking over recent years the question you need to ask yourself is ‘should I really be in the business of providing my own datacentre?’.
“If they decide not to, then some resellers and integrators will actually have spaces they can rent out, and they certainly need to help those people do that in a way that works for them.”
Yet for those who decide to maintain their own facility, McIsaac highlighted several trends that should be considered. First was the shift to virtualised servers and the way virtualisation can be used to consolidate physical units. Monitoring usage to identify off-peak periods when underutilised servers can be powered down is another power-saving measure.
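As a minimal sketch of the monitoring idea McIsaac describes: given utilisation samples per server, flag the machines whose peak load never rises above a threshold as candidates for power-down or consolidation. The server names, readings and the 20 per cent threshold here are illustrative assumptions, not figures from the article.

```python
# Sketch: flag power-down/consolidation candidates from utilisation samples.
# All names and thresholds below are hypothetical examples.

def powerdown_candidates(samples, peak_threshold=0.20):
    """Return servers whose peak utilisation never exceeds the threshold.

    samples: dict mapping server name -> list of utilisation readings (0.0-1.0)
    """
    return sorted(
        name for name, readings in samples.items()
        if readings and max(readings) < peak_threshold
    )

utilisation = {
    "web-01":   [0.55, 0.72, 0.64],  # busy even off-peak: keep running
    "batch-02": [0.05, 0.12, 0.08],  # idle at peak: candidate
    "dev-03":   [0.02, 0.04, 0.03],  # idle at peak: candidate
}

print(powerdown_candidates(utilisation))  # ['batch-02', 'dev-03']
```

In practice the readings would come from a monitoring system sampled over weeks, and the threshold would be tuned to leave headroom for demand spikes.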
He noted, however, the attendant problems of high power and heat densities.
“A lot of datacentres are just not designed for the heat and power capacities of the racks that people are building today, particularly the blades,” McIsaac said. “What I am hearing is that these things are just sucking up juice like you wouldn’t believe.”
To help, he advised auditing the datacentre to ensure the best layout – using hot- and cold-aisle cooling – and trying to move away from Computer Room Air Conditioning (CRAC) units, as they are expensive and relatively inefficient.
“Increasingly I am hearing people say they are not bothering with raised flooring any more,” McIsaac said. “It was a bit of a surprise to me, but what some of the smarter folks are saying is raised floors were important in the old days because you had these massive cables – both power and networking – that you had to drag across the floor. Now you can afford to run the cables overhead.”
Those channel partners with clients in appropriate geographic locations also should consider passive cooling.
“The idea behind that, particularly down in Melbourne, Canberra and Tasmania, is that for a large proportion of the year overnight temperatures are very low – so customers are asking ‘why do I need air conditioning at all? Why can’t I suck the cool air from outside and take the hot air from inside and pipe it out? Can I actually use the hot air from the datacentre for something useful?’”
McIsaac also pointed out the inherent difficulty any refurbishment or datacentre design faces.
“The fundamental problem we have is that datacentres are built for about a 25-year lifecycle,” he said. “Your typical servers have a three-year lifecycle. With every new generation, the power density and heat density have gone up dramatically. So you have these two cycles of innovation which are grossly out of step.
“I think the only solution to that, if you are actually going to build your own datacentre, is to recognise that the building infrastructure might last 20 years but the power distribution and networking infrastructure might have a vastly shorter five- to seven-year cycle, so all of it needs to be modularised.”
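The mismatch McIsaac describes can be put in back-of-envelope terms: the lifecycle figures below are the ones he quotes, and the arithmetic simply counts how many refresh cycles a single building shell has to absorb.

```python
# Back-of-envelope count of refresh cycles per datacentre building.
# Lifecycle figures are from McIsaac's comments; the midpoint for the
# power/networking cycle (6 years, within his 5-7 range) is an assumption.

building_life = 25  # years - datacentre built for ~25-year lifecycle
server_life = 3     # years - typical server refresh
plant_life = 6      # years - power distribution / networking refit

print(building_life // server_life)  # 8 server generations per building
print(building_life // plant_life)   # 4 power/network refits per building
```

Eight server generations and four plant refits inside one shell is the reason he argues the power and networking infrastructure has to be modular rather than built in.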