Since the 1990s, the Uptime Institute, a New Mexico-based think tank and professional services organisation, has maintained a four-tier model for classifying datacentre availability.
A tier-1 datacentre is the least available, and a tier-4 datacentre the most. For critical applications, the typical customer will be looking at a tier-3 or tier-4 facility, as these are designed to deliver little to no downtime. Or that's how the system is meant to work.
Datacentre availability is obviously a critical consideration for a customer shopping around. Many requests for tender specify a tier-3 minimum, so even if a provider doesn't officially follow the tiered system, customers will still compare it against that benchmark. It is important to note, however, that not all datacentre providers work to the Uptime Institute's tiering system.
Equinix is one example. A major datacentre player, it maintains 87 facilities around the world, but avoids comparing itself to the conventional tiered system.
“We build to our own internal guidelines, so from that perspective we’re not always comparing ourselves to the Uptime Institute,” Equinix Australia managing director, Darren Mann, said.
“I think over the last 10 years we’ve gained the experience of operating datacentres, and I think for a customer experience and balancing it against capital expenditure required, we need to get the best outcome with each facility we build.
“Having said that, the basic design philosophy we have is to maintain a concurrently running datacentre. If you needed to compare that with the tiered system, that equates to the Uptime Institute’s tier-3 level.”
A source of confusion
One of the concerns facing the datacentre industry, and something customers need to be aware of, is that organisations (accidentally or otherwise) occasionally misrepresent their datacentre as belonging to a certain tier. Just as likely is the scenario where a customer mistakes a tier rating for one part of the datacentre as applying to the facility as a whole.
Because the tier ranking takes the whole picture into account, a datacentre will often have different parts of its infrastructure belonging to different tier categories.
“I think people loosely use the term ‘tier’,” Emerson Network Power national product manager, Mark Deguara, said.
“The confusing part is that in a lot of presentations which people attend, the speakers will start talking about the tiers, and they might only be talking about the electrical component, and the listeners take that as ‘I’ve got a facility that matches the electrical component’.
“Tiers encompass, in terms of facility, the whole spectrum of components: there’s the security aspect of it, there’s the electrical aspect of it, there’s the cooling aspect, physical location aspect, and so on.”
From a cost perspective, building a complete tier-4 datacentre is impractical, requiring (in a basic sense) two of everything. Concurrently running UPSes, dual air-conditioning and double redundancy are minimum requirements for a tier-4 datacentre, but other considerations also apply. Location is important: a tier-4 datacentre needs to be within a certain distance of a major city, and then there are concerns around Australia’s notoriously unpredictable electricity grid.
“I think the Government needs to make some investments in the power infrastructure around the state, and I believe they’ve recognised that,” Equinix's Mann said.
“Having said that, operators like us need to design their facilities to respect these disruptions.”
As such, datacentre operators will often be selective, maximising redundancy where it is needed most – even to tier-4 standards – while taking calculated risks in other areas, yielding an overall tier rating of three, or perhaps even two.
The situation is much the same from the customer point of view. Eaton product manager for data centre solutions, Ciaran Bolton, said customers were looking to strike a balance between efficiency and reliability.
“You can achieve high reliability without having to go all the way to tier 4,” Bolton said. “There aren’t too many tier-4 datacentres in Australia. You can still offer that inherent reliability without having the huge amount of infrastructure with a tier-2 or -3 datacentre.”
Confusing, isn’t it?
And it’s all about to get even more confusing. No longer content with the basic four-tier model, the Uptime Institute announced in May that it had developed a new standard. To be released on July 1, the Operational Sustainability Standard works in tandem with the existing Tier Classification System, establishing a benchmark for risk mitigation and site management behaviours at each tier level. It also provides a means to rate how effectively each datacentre is managed and operated against its tier’s criteria. Ratings will be split into Gold, Silver and Bronze.
So, for instance, a tier-3 facility that is managed optimally would be rated tier-3 Gold.
APC Asia-Pacific CTO, Christian Bertolini, said partners would need to understand these additional categorisations, as they will be important in properly representing the quality of a datacentre.
“Standard tiering is only based on the design – how many components are in the system. The new classification will be based on how the system is operated and maintained throughout the lifecycle of the datacentre,” he said.