
In depth: The infinite datacentre

Tablets making inroads into the enterprise and continually rising power costs were among the key topics of last year, and Gartner expects these issues to grow in importance in 2012 and beyond

After an eventful 2011, this year is already shaping up to be an interesting one for IT, nowhere more so than in the infrastructure space. During his speech at the recent Gartner Infrastructure and Operations Data Centre Summit in Sydney, Gartner research VP, Phillip Sargeant, highlighted several trends that people need to keep an eye on in the next six to 12 months or risk being left behind. One of those trends was the consumerisation of IT and how it correlates with the introduction of the media tablet.

While Sargeant admits that tablets are nothing new and have been available for years in different forms, it was not until the introduction of the iPad in 2010, and iPad 2 the following year, that enterprises finally sat up and began paying attention.

“The consumer employees saw them as a device that they wanted at home, and soon discovered that some of their day-to-day business functions, such as email and calendaring, became simple to use as well,” he said. “This quickly became a reason to begin using iPads in the office environment, often before IT was aware of it, and before security, data storage and usage guidelines were put into place.”

According to Sargeant, the adoption rate of iPads took on a new dimension when executives started buying them for their teams as an incentive or a perk. “They also told IT that these devices were ideal for ‘travelling executives’ and should be added to the mix of ‘approved’ devices,” he said.

"As a result of this, IT had to adopt new implementation strategies within a short time frame, often with only “a minimalist view” of the long-term objectives. “These devices can be very useful in business, but a solid, well-defined and communicated strategy needs to be put into place,” Sargeant said.

Power and the Data Centre

The rapid evolution of IT hardware has meant that server power consumption is continually dropping and power efficiency is improving.

“Today's Nehalem processors, at full load, consume less energy than a five-year-old server at idle with 10 times the performance,” Sargeant said.

Gartner has found that the traditional asset life cycle for servers now averages roughly four to five years, meaning that companies tend to hold onto their older servers for “less mission-critical” or “compute-intensive” situations.

“While this was often done to reduce capital costs in replacements, the cure was often worse than the disease, since the energy consumption of these older servers was significantly more than newer ones,” Sargeant said.
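As a rough illustration of that point, consider the running costs involved (a minimal sketch; the wattages and the AU$0.25/kWh tariff below are assumptions for the sake of the example, not Gartner figures):

    # Hypothetical figures: an older server drawing 400 W around the clock
    # versus a current-generation replacement drawing 250 W.
    HOURS_PER_YEAR = 24 * 365
    TARIFF = 0.25  # AU$ per kWh -- an assumed rate

    def annual_energy_cost(watts):
        """Annual electricity cost for a machine drawing `watts` continuously."""
        return watts / 1000 * HOURS_PER_YEAR * TARIFF

    old_server = annual_energy_cost(400)  # ~AU$876 per year
    new_server = annual_energy_cost(250)  # ~AU$548 per year
    print(f"Premium for keeping the old server: AU${old_server - new_server:,.0f}/yr")

Across a fleet of hundreds of servers, that premium can quickly outweigh the capital cost deferred by holding onto the old hardware.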

Another way Sargeant suggests approaching this type of asset life cycle management is to time upgrades to Moore's Law, though with processor and core density increasing, he recommends addressing the question of “how best to use them”.

“While some enterprises will continue the virtualisation push, others are beginning to realise that business-critical applications designed for x86 may need to be rewritten to take advantage of four-core and greater systems,” he said. “This will reintroduce the concepts of parallel processing and parallel development methodologies, while at the same time create its own series of cascade effects across the IT group.”

The “cascade effects” Sargeant refers to concern how IT development teams will have to change, both in skills and in core methodologies, to meet the requirements of parallel processing.

“Senior staff may not be able to adapt as quickly as needed, which will force a new look at acquiring outside talent to augment the staff,” he said.

With tightening budgets meaning that fewer new data centres are being built, many companies are looking to scale their existing data centres vertically through density, and virtualisation is now being used as a way to achieve this.

According to Sargeant, if virtualisation is well implemented, it can lift average server utilisation from the typical seven to 12 per cent into the region of 40 to 50 per cent, which translates into floor space and energy savings, as well as added agility through faster provisioning.
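The consolidation arithmetic behind that claim is straightforward (a sketch with illustrative numbers; the fleet size and headroom target are assumptions):

    import math

    # 100 physical servers idling at ~10 per cent utilisation carry the same
    # aggregate workload as far fewer hosts running at ~45 per cent.
    physical_servers = 100   # assumed starting fleet
    current_util = 0.10      # within Gartner's seven to 12 per cent range
    target_util = 0.45       # within the 40 to 50 per cent range

    aggregate_load = physical_servers * current_util      # 10 servers' worth of work
    hosts_needed = math.ceil(aggregate_load / target_util)
    print(f"{physical_servers} hosts -> {hosts_needed} hosts, "
          f"freeing {physical_servers - hosts_needed} machines' worth of "
          f"floor space and power")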

However, questions loom on the horizon around the growing number of cores per server and overall energy consumption trends in the data centre.

“The core issue will have to be addressed within IT operations from a performance and licensing point of view, and from applications during initial design phases of new projects, where old methods of linear coding will not be effective,” he said. “Four- and eight-core systems are becoming common, and 16 cores will be common within two years, so parallel coding techniques need to be addressed soon.”
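As a minimal sketch of the shift from linear to parallel coding that Sargeant describes, the snippet below runs the same CPU-bound work sequentially and then fans it out across cores with Python's standard concurrent.futures module (the workload itself is just a stand-in):

    import time
    from concurrent.futures import ProcessPoolExecutor

    def crunch(n):
        """Stand-in for a CPU-bound unit of work."""
        return sum(i * i for i in range(n))

    jobs = [2_000_000] * 8

    if __name__ == "__main__":
        # Linear ("old method"): one core works through the jobs in sequence.
        start = time.perf_counter()
        linear = [crunch(n) for n in jobs]
        print(f"linear:   {time.perf_counter() - start:.2f}s")

        # Parallel: the same jobs spread across all available cores.
        start = time.perf_counter()
        with ProcessPoolExecutor() as pool:
            parallel = list(pool.map(crunch, jobs))
        print(f"parallel: {time.perf_counter() - start:.2f}s")

        assert linear == parallel  # same results, different wall-clock time

On a four- or eight-core machine the parallel version finishes several times faster, but only because the work was restructured to be divisible, which is exactly the development change Sargeant is flagging.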

Increases in energy costs mean that organisations can no longer overlook this expense, especially in Australia, where the Carbon Tax has passed the Senate and is due to take effect later this year. “On-site generation as a complement to the electrical grid is increasingly financially viable when supported by grants, feed-in-tariffs, tax incentives, the sale of renewable energy certificates or their equivalent, and so on,” Sargeant said.

Having treated energy as a “fixed and relatively low operational cost” for many years, many organisations across the industrial, commercial and public sectors now have limited energy management capabilities to deal with the current situation, Sargeant said.

“They frequently lack granular data related to how and where energy is being consumed around the enterprises, often not even beyond the utility bill,” he said. “The capabilities they do have are frequently at a local level and are often patchy.”

This long history of complacency means that organisations lack sufficient processes, systems, roles and expertise to oversee energy consumption with the goal of optimising demand and supply across the whole enterprise.

“They typically do not have the technology platforms that are commensurate with the risk and importance of energy management to their enterprise,” Sargeant said.

Better Tracking of IT Consumption

The recent push for data centre efficiency metrics is due in no small part to increased concern over the environmental impact of these facilities. So far, the proposed metrics attempt to express the relationship between total facility power delivered and IT equipment power available, though Sargeant feels their scope is still limited.

“Although these metrics will provide a high-level benchmark for comparison purposes between data centres, what they do not provide is any criteria to show incremental improvements in efficiency over time,” he said.

The other limitation Sargeant points to is that the proposed metrics focus only on the difference between power supplied and power consumed, and do not allow the effective use of that power to be monitored.

“For example, a data centre might be rated with a PUE of 2.0, an average rating, but if that data centre manager decided to begin using virtualisation to increase his or her average server utilisation from 10 per cent to 60 per cent, while the datacentre itself would become more efficient using existing resources, then the overall PUE would not change at all,” he said.
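Sargeant's point follows directly from the definition of PUE, which divides total facility power by IT equipment power and says nothing about what that IT power achieves (the kilowatt figures below are illustrative):

    def pue(total_facility_kw, it_equipment_kw):
        """Power Usage Effectiveness: total facility power / IT equipment power."""
        return total_facility_kw / it_equipment_kw

    # A facility drawing 1,000 kW overall to feed a 500 kW IT load rates 2.0.
    print(pue(1000, 500))  # 2.0

    # Lifting server utilisation from 10 to 60 per cent means far more useful
    # work per kilowatt, but if the facility and IT draw are unchanged the
    # metric cannot see the improvement.
    print(pue(1000, 500))  # still 2.0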

According to Sargeant, a “more effective way” to analyse energy consumption is to track the effective power use of existing IT equipment with the performance of that equipment in mind.

“While this may sound intuitively obvious, as everyone wants to have more efficient IT, a typical x86 server will consume between 60 and 70 per cent of its total power load when running at very low utilisation levels,” he said.

The important point that Sargeant wants to drive home is that raising utilisation levels has a small impact on power consumed but a big influence on effective performance per kilowatt.

“Pushing IT resources toward higher effective performance per kilowatt can have a twofold effect of improving energy consumption and putting energy to work, as well as extending the life of existing assets through increased throughput.”
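A simple linear power model makes that asymmetry concrete (a sketch: the 65 per cent idle draw follows Sargeant's 60 to 70 per cent figure, while the 0.5 kW peak and the assumption that performance scales with utilisation are illustrative):

    def power_draw_kw(utilisation, peak_kw=0.5, idle_fraction=0.65):
        """Linear model: a fixed idle draw plus a load-proportional component."""
        return peak_kw * (idle_fraction + (1 - idle_fraction) * utilisation)

    def perf_per_kw(utilisation):
        # Treat delivered performance as proportional to utilisation.
        return utilisation / power_draw_kw(utilisation)

    print(f"power at 10% load: {power_draw_kw(0.10):.2f} kW")  # 0.34 kW
    print(f"power at 60% load: {power_draw_kw(0.60):.2f} kW")  # 0.43 kW
    print(f"performance per kW: {perf_per_kw(0.60) / perf_per_kw(0.10):.1f}x better")

Under these assumptions, power drawn rises by roughly a quarter while useful work per kilowatt improves nearly fivefold.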

