The man running one of Australia’s biggest datacentres knows a thing or two about datacentre class systems and their associated benefits and challenges. Ask co-director of Brisbane-based Digital Sense, Michael Tran, about the confusing tier system and he’ll get frustrated – even hot under the collar – with the misinformation and lack of standards permeating the market.
Digital Sense is a high-density datacentre that caters to IT computing loads from 3kW to 25kW per rack. Its main customers are a growing number of SMEs as well as key verticals including medical and finance.
Tran said there is no official Australian standard or central registry to classify tiers, which means players are able to make “ridiculous claims” about what constitutes a reliable datacentre.
Yet getting it right is so important given increasing demands for computing power and the explosive growth of the Internet. Virtualisation, consolidation and automation are also changing the face of the datacentre and resellers need to know how it affects the IT side (servers, storage and networking) along with the physical infrastructure (power and cooling).
ALL SHAPES AND SIZES
Datacentres come in all shapes and sizes, and are classified based on the floor size of the facility and the types of security and redundancy employed, according to IDC services research manager, Matthew Oostveen. The simplest form is the server room, a secondary computer location usually under IT’s control. Next comes the localised datacentre, which could be either a primary or secondary location, and features some power and cooling redundancy to ensure constant temperatures.
Climbing up the food chain, and what we typically think of when someone says the word datacentre, are the mid-tier facilities, Oostveen said. These are the primary server location for an organisation consisting of a large room with superior cooling systems that are redundant and protected by levels of physical and digital security.
Finally, there’s the enterprise-class datacentre, which isn’t common in Australia. Once resellers have classified the datacentre, they need to determine what tier it is – and that’s where things get a little tricky.
“My biggest challenge is dealing with the confusion surrounding the tier system and educating clients about it,” Digital Sense’s Tran said. “The big problem is most datacentres are creating their own set of tier standards, meaning reliability is in the eye of the beholder. It’s ridiculous.”
Tran is so worried about the state of the Australian market and the Wild West approach, he has joined the board of a standards organisation (The Australian Standard for Computer Accommodation) and is pushing for standards around site infrastructure functionality.
So what, if any, standards are in play today? Typically, the market tends to rely on the tiered classification approach developed by the US-based Uptime Institute as a common benchmarking standard, which has been in use for 10 years.
The Institute categorises datacentres by the amount of downtime they experience using a numbered tier system from 1 to 4, with 4 being the best. The Telecommunications Industry Association (TIA), an international standards organisation, likewise defines availability (uptime) on a scale from 1 to 4, with 4 being the best.
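Each tier level is usually expressed as an availability target. As a rough illustration – the percentages below are the commonly cited Uptime Institute figures, quoted here as an assumption rather than taken from this article – annual downtime can be derived from availability like this:

```python
# Sketch: convert an availability percentage into expected annual
# downtime. The tier/availability pairings are the commonly cited
# Uptime Institute figures (an assumption, not from the article).
MINUTES_PER_YEAR = 365 * 24 * 60

tier_availability = {
    1: 99.671,  # Tier 1: basic, non-redundant
    2: 99.741,  # Tier 2: redundant components
    3: 99.982,  # Tier 3: concurrently maintainable
    4: 99.995,  # Tier 4: fault tolerant
}

def annual_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year implied by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for tier, pct in tier_availability.items():
    print(f"Tier {tier}: {pct}% -> ~{annual_downtime_minutes(pct):.0f} min/yr")
```

On these figures, the jump from Tier 2 to Tier 3 is the dramatic one: roughly a day of downtime a year shrinks to about an hour and a half.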
THE AUSTRALIAN LANDSCAPE
IDC’s Oostveen said about one-third of enterprise datacentres are categorised as Tier 4, and another 34 per cent are Tier 3.
“This is quite impressive and a hallmark of the mature market in Australia,” he said.
But word on the street seems to tell a different story. Industry experts said many datacentres play in the Tier 2 and Tier 3 space, and only a limited number of datacentres can claim an availability (uptime) level corresponding to Tier 4.
“This is unless maybe you’re NASA,” Emerson Network Power Australia’s director of marketing, Peter Spiteri, said. “Tier 4 is almost an impossible specification. It means you need every element replicated: Dual buses, processes, all the way from the chip to the grid. It means everything is redundant.”
HP critical facilities practice lead, technology service, Mark Toner, said most commercial/small enterprise customers are running a Tier 1 or Tier 2 facility. “Clients in multi-tenant office facilities struggle with Tier 2 and beyond, due to the plant equipment requirements and cost,” he said.
APC Pacific vice-president, Gordon Makryllos, agreed, and said only a limited number of datacentres globally could claim an availability level corresponding to Tier 4. In reality, the most common tier rating for private organisations is a Tier 2 datacentre, while the most common rating for commercial facilities is a Tier 3. Confused yet? Many customers are. Like Tran, HP’s Toner said there was a lot of misinformation about the datacentre tier system.
“Many customers are tier confused,” Toner said. “They’re not sure what tier they have or what tier they need based on business availability requirements.” Given that uncertainty, resellers can offer a host of services. Doing a datacentre reliability and capacity assessment, which explores the business IT availability requirements, is a good start, Toner said.
“Most clients don’t have a computer room facility strategy. Many haven’t thought about IT infrastructure growth projections or whether their current facility will be sufficient for their needs,” he said. “A good datacentre engagement strategy can help answer questions such as: How many facilities do I need; what size, power density and tier rating; what location; what topology; how much will it cost and when; and what’s the business case for change?”
POWER AND COOLING COMBO
Another major concern to factor into datacentre decisions is power. Energy costs are going through the roof (power bills are going up 20 per cent per annum) and there’s a shortage of power available in Australia. This makes the power and cooling aspect of the physical infrastructure absolutely critical. Power protection is a great datacentre conversation opener, Emerson’s Spiteri said, and the simplest way for an IT reseller to move into the area of support and infrastructure.
“It’s the gateway and the next logical stepping stone. But be careful: Partner up if you don’t have the skills,” he said. “Power is electrical and air is mechanical. There are risks if trying to run it gung-ho. The outcome is catastrophic: Fire and explosions.” Spiteri knows all too well the feeling of being electrocuted and stressed the importance of seeking engineering assistance.
He also highlighted some power-related technology required in a datacentre build. The list includes: UPS power protection (N for Tier 2, N+1 for Tier 3, 2N for Tier 4); power and data cabling; transfer switches for immediate changeover; security (biometric, security cameras, iris/fingerprint scanners); air conditioning; backup sites/disaster recovery sites; redundant internet connections; managed services (cloud computing); and software (firewalls).
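The N, N+1 and 2N shorthand in the list above describes how many UPS modules are installed relative to the load: just enough, one spare, or a fully duplicated set. A minimal sketch of the arithmetic – the 80kW load and 40kW module rating are hypothetical example figures, not from the article:

```python
import math

# Sketch of UPS module counts under the redundancy schemes named above.
# The load and module ratings passed in below are hypothetical examples.
def ups_modules(load_kw: float, module_kw: float, scheme: str) -> int:
    """Number of UPS modules needed for a given redundancy scheme."""
    n = math.ceil(load_kw / module_kw)  # modules needed just to carry the load
    if scheme == "N":      # no redundancy (Tier 2 in the list above)
        return n
    if scheme == "N+1":    # one spare module (Tier 3)
        return n + 1
    if scheme == "2N":     # fully duplicated power path (Tier 4)
        return 2 * n
    raise ValueError(f"unknown scheme: {scheme}")

# An 80kW IT load served by 40kW modules:
for scheme in ("N", "N+1", "2N"):
    print(scheme, ups_modules(80, 40, scheme))  # 2, 3 and 4 modules
```

The jump from N+1 to 2N is what Spiteri is describing when he calls Tier 4 “almost an impossible specification”: every element of the power path is duplicated, not just topped up with a spare.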
APC’s Makryllos said the largest opportunity for the channel is related to the efficiency of the datacentres and to those unique technologies, like InRow cooling systems and hot aisle containment, that can make high efficiencies possible in datacentres of any size and density.
At the same time, clients are increasingly looking for a greener approach in order to maximise energy efficiency and save on costs, IBM datacentre business executive, David Yip, said.
“The green push is a major factor in datacentres,” he said. “Datacentres are a large consumer of power. The facilities systems can consume just as much power as the IT equipment itself.”
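One common way to quantify Yip’s point is power usage effectiveness (PUE), the ratio of total facility power to IT power. PUE isn’t named in the article, so this is an illustrative aside with made-up figures: a site whose facilities systems draw as much as the IT load has a PUE of 2.0.

```python
# Illustrative sketch of PUE (power usage effectiveness): total facility
# power divided by IT power. PUE is a standard industry metric but is not
# mentioned in the article; the figures below are example values.
def pue(it_kw: float, facility_overhead_kw: float) -> float:
    """Total facility power divided by IT power (lower is better, 1.0 is ideal)."""
    return (it_kw + facility_overhead_kw) / it_kw

# Facilities systems drawing as much as the IT load -> PUE of 2.0
print(pue(500, 500))
# A more efficient site with 30% overhead -> PUE of 1.3
print(pue(500, 150))
```

The closer PUE gets to 1.0, the smaller the share of the power bill going to cooling and power distribution rather than computing.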
NEC product manager for datacentres and hosted solutions, Loren Weiner, pointed out power consumption at the rack level had increased by a factor of eight in the last 10 years.
“Today, the cost of power/cooling is higher than hardware costs and is expected to continue rising,” he said. The design and operation of a green datacentre requires a set of technologies that minimise the carbon footprint of the building such as: Using low-emission building materials; incorporating sustainable landscaping; operating waste recycling systems; installing catalytic converters on backup generators; and implementing alternative energy technologies such as photovoltaic, heat pumps and evaporative cooling.
NEW FACE OF NETWORKING
But while power is on everyone’s mind, networking in the datacentre has seen some seismic shifts resulting in new products, protocols and ways of managing the server, Cisco consulting engineer, Brad Engstrom, said. “Over the last 10 years, the datacentre and campus networking design stayed the same. But within the last year, there has been a big change. Resellers need to relearn networking for datacentres,” he said. “Most of this started because of the trend towards server virtualisation.”
Engstrom highlighted two big changes: The edge of the network moving into the virtualised server; and the collapse of Ethernet and fibre channel (storage networking) onto a unified fabric.
“This means resellers need to do some cross-skilling. It means networking people need to know about storage networks, Ethernet networking and server virtualisation. Resellers will be a key conduit to getting the new datacentre designs in front of customers,” Engstrom said. Another way to save on capital and operational costs is to implement a hybrid tier approach. For clients with datacentre facilities of less than 300sqm, HP advocates multi-tier facilities.
“Historically, organisations have approached the problem with one size fits all. If a client has a critical system that requires Tier 3 availability, then traditionally the entire facility was designed and built Tier 3,” Toner said. “However, other systems in that datacentre, like testing and development systems, may have a lesser availability requirement and can suffice with a lesser tier.”
Emerson’s Spiteri said managed services are the sweet spot for resellers in the co-location space.
“It’s not just having the cow and giving the milk, it’s getting the meat from the cow as well. The reseller is not only charging for the space, but is saying, ‘Do you want software-as-a-service with that, do you want cloud computing. Can I manage your network and hook up alarms’,” he said.
NEC’s Weiner said the growing use of media-centric content on the Internet, the infrastructure demands of cloud computing and the rise of high-density power users mean companies – both large and small – are turning to co-location.
“There’s a good opportunity for the channel to become co-location hosting providers,” Weiner said. “They can service the needs of their smaller customers and add an additional revenue model to take advantage of the shift to outsourced and cloud computing.”
IDC’s Oostveen added CIOs aren’t too eager to open the purse strings and build a new facility.
“Despite the age of datacentres, IDC research shows that very few CIOs [less than 10 per cent] intend to build a new facility. The reason for this is cost, with new facilities ranging upwards of $10 million for localised facilities and over $100 million for enterprise class centres,” Oostveen said. “Instead of building new facilities, these CIOs are looking for ways to upgrade and refit their existing datacentre. The opportunity for the channel is tremendous and should be centred on providing consultative oriented delivery of services and product.”