Tech watch: Server flexibility pays

Scalability has become a critical option for vendors and their partners to offer customers in a world infused with cloud fever

When an organisation invests a significant amount of money in a box, it wants that box to work.

Even more, it wants that box to keep working, even as staff numbers and application usage grow, or if it decides to take the plunge into the cloud.

That box we’re talking about is, of course, the server. And typically, the modern customer wants a server that is scalable.

“We’re trying to help customers minimise risk as much as possible,” HP ProLiant product manager, Andrew Cameron, said. “Some of the things we’re focusing on now, HP has been focusing on for many years. We’re making sure that power and cooling are a major part of our investment portfolio. When we talk about power and cooling, it doesn’t just start with the server; it extends to the entire datacentre the server sits in. But specifically from a server perspective, we’re packing more sensors into servers.”

From a scalability point of view, HP’s flagship product is its BladeSystem Matrix, a converged infrastructure offering for delivering shared services. It’s integrated with HP Server Automation, and claims to lower storage costs for physical and VM-based workloads across fibre channel, NFS and iSCSI with tiered storage tagging.

How does this fit in with the scaling story? “BladeSystem Matrix uses a piece of software we’ve had for a long time, called Insight Orchestration, that enables customers to provision servers very quickly, and sets a partner up as being a service provider rather than just a function for customers,” Cameron said. “We’re finding that our customers are starting to hook into that vision and understand that they can now charge out there and host their IT infrastructure internally. It means they are able to realise some of the efficiencies that cloud providers like Google already offer using these technologies.”

Avoiding the rip

The topic of scalability is one that Dell holds especially close to its heart. The vendor approaches the server business entirely from this perspective, and has little interest in rip-and-replace strategies.

“We build on a building block foundation to allow customers to scale their business as their requirement to grow changes,” Dell enterprise marketing manager, Justin Boyd, said.

As with HP, cloud is the specific scaling model that Dell has focused on of late, with its recent PowerEdge C range aimed at customers building internal or external clouds.

With cloud or HPC (high-performance computing, Dell’s other server target) deployments, a lot of redundancy is already built into the core of the infrastructure, so there is far less need for redundancy at the server level.

“That’s where some of the key total cost of ownership benefits come into play, where the redundancy is actually reliant on the cloud or the HPC environment to make sure all of the nodes are up at any one time,” Boyd said.

At this stage, this kind of customised server design is niche, Boyd said, but it was something to keep an eye on into the future.

“I think IDC has predicted the A/NZ market to be about 25,000 servers a quarter, and the vast majority of that market is going to be full-featured,” he said.

“As x86 is taking over more and more of the datacentre, these specialised products that are being designed for these types of tasks are definitely going to increase over coming quarters and years.”

It’s a view that IBM shares. IBM business development manager, System x, Peter Hedges, said there was a relatively unique case emerging where customers were asking for features to be stripped out of a server to improve overall efficiency.

“I’d say it’s a niche area in the x86 space,” Hedges said. “In some instances, levels of availability and redundancy exist within the application or software layer. Web serving is an example – if there’s an outage of a server, the Web delivery continues. A user that is logged in and being served by one machine would just hit the refresh button and they’d get served by another machine.”

In those situations, customers are saying they don’t need redundant network cards or power supplies, or highly redundant disk arrays within each and every server node in the solution.
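The application-layer redundancy Hedges describes can be sketched in a few lines: the client simply retries against another node when one fails. This is an illustrative sketch only – the host names, the simulated outage and the helper functions are hypothetical, not drawn from any vendor's product.

```python
# Sketch of application-layer failover: if one web node is down,
# the client falls through to the next and the request still succeeds.
# Host names and the simulated outage below are purely illustrative.

def fetch(url, servers, fetch_one):
    """Try each server in turn; return the first successful response."""
    last_error = None
    for host in servers:
        try:
            return fetch_one(host, url)
        except ConnectionError as err:
            last_error = err  # this node is down; try the next one
    raise last_error  # every node failed

# Simulated cluster: web-01 is "down", web-02 serves the page.
def fake_fetch_one(host, url):
    if host == "web-01":
        raise ConnectionError(f"{host} is down")
    return f"page {url} served by {host}"

print(fetch("/home", ["web-01", "web-02"], fake_fetch_one))
# prints "page /home served by web-02"
```

Because the retry lives in the client and the redundancy in the pool of nodes, no individual server needs duplicate power supplies or network cards for the service to stay up.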

But Hedges emphasised that it was a niche occurrence, and highlighted a popular example in which the customer would certainly want all the features in place.

“VMware – fantastic infrastructure solution, but I still don’t want a VMware host going away and taking down with it 10, 15, 20 workloads because there was a physical failure,” he said.

“Yes, those workloads might get restarted, but that’s still an outage. There’s definitely the need for both sides of the coin.”


And in keeping with the two-sided coin motif, there is still room for the rip-and-replace engagement, according to HP.

“I think every customer is different, and what we try and do is have services and tech people within the services group that help customers through those scenarios,” HP’s Cameron said.

“For some companies it may be rip-and-replace, for others it may be over time they get a BladeSystem Matrix and slowly transition their environment over, and we’re very keen and willing to engage customers to help them through that transition,” he said.

The key theme for the server industry still seems to be that choice is king. Yet, at the same time, the trend towards scalable architecture continues to gain ground, and it pays to note that, within the scalable architectures vendors offer, different customers are going to require different feature sets.

Flexibility pays.

