Paradigm shifts were easier before the bubble burst. Serious change costs serious money, and few IT organisations have gobs of green stuff to throw around anymore. So it's no surprise that utility computing - hailed as the biggest paradigm shift since the first disk drive spun up - has stalled. It doesn't help that the marketing geniuses who came up with the concept still can't agree on what it means. There are three basic definitions.
Utility as an on-demand computing resource: Also called adaptive computing, depending on which analyst or vendor you talk to, on-demand computing allows companies to outsource significant portions of their data centres, and even ratchet resource requirements up and down quickly and easily depending on need. For those of us with grey whiskers in our beards, it's easiest to think of it as very smart, flexible hosting.
Utility as the organic data centre: This is the pinnacle of utility computing. It refers to a new architecture that employs a variety of technologies to enable data centres to respond immediately to business needs, market changes or customer requirements. Data centres not only respond immediately, but nearly effortlessly, requiring a significantly smaller IT staff than traditional data centre designs.
Utility as grid computing, virtualisation, or smart clusters: This is just one example of a specific technology designed to enable the above definitions. Other technologies that will play here include utility storage, private high-speed WAN connections, local CPU interconnect technologies, blade servers and more.
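At its core, the on-demand definition above is elastic capacity: allocation that tracks demand rather than being provisioned for the peak. A toy sketch of that ratcheting logic in Python (the function, thresholds and headroom figure are all hypothetical, purely to illustrate the idea):

```python
def ratchet(allocated: int, demand: int, min_nodes: int = 1) -> int:
    """Return a new node allocation that tracks demand with ~20% headroom.

    Scale up immediately when demand outgrows the allocation; ease back
    down one node at a time so capacity follows need without thrashing.
    """
    target = max(min_nodes, demand + demand // 5 + 1)
    if target > allocated:
        return target                      # ratchet up at once
    return max(target, allocated - 1)      # ratchet down gradually

# A bursty workload: allocation follows demand up, then eases back down.
allocated = 4
for demand in [3, 10, 10, 6, 2, 2]:
    allocated = ratchet(allocated, demand)
    print(demand, allocated)
```

The asymmetry (scale up fast, scale down slowly) is the point: the customer pays for what it uses, while the provider avoids flapping between allocations.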
These three descriptions are different enough to seem unrelated, but in fact they're dependent on each other for survival. Should utility computing ever live up to its name - a resource you plug in to, as you would the electric power grid - then that resource needs to be distributed, self-managing, and virtualised. Whether that grand vision will ever be realised is an open question, but at least some of the enabling technologies are already here or on the horizon.
Here and now
The on-demand version of utility computing is the one closest to fruition. Vendors such as EMC, HP, IBM, and Sun have been selling it for some time. This year Sun has been the noisiest of the bunch, recently announcing that it wants to be the electric company of off-site computing cycles.
"Sun has decided to take utility to a whole new level," Sun's vice-president of marketing for utility computing, Aisling MacRunnels, said. "We are building the Sun Grid to be easy to use, scalable and governed by metered pricing. We're also incorporating a multi-tenant model that allows us to provide a different scale of economy by pushing spare CPU cycles to other customers."
The Sun Grid comprises several regional computing centres, each running an increasing number of computing clusters based on Sun's N1 Grid technology.
Sun wants to cut through the confusion around utility computing and attract customers, but it also has a few chasms to cross - which is why the Sun Grid still isn't commercially available.
"The goal once we're out there is to be able to give additional CPU resources to our customers immediately," MacRunnels said. "That's a big challenge for us. Right now we know we're not yet commercially viable, which is why we're only chasing specific application markets. We need to walk before we run."
President and principal analyst at market research firm Pund-It, Charles King, has a rather cynical take on Sun's offering.
"What Sun is selling isn't really new; it's been offered by IBM and HP for several years," King said. "Sun has simply gotten more specific and done what it does very well, which is simplify something highly complex with a great marketing slogan."
Most analysts agree that IBM leads the field in offering utility-based services to clients of its On Demand and Global Services departments. "Other companies are wrapped up in the whole notion of access to compute power," vice-president of deep computing at IBM, Dave Turek, said.
"But computing power comes in many forms, including not just grids and virtualisation, but also more standard forms of hosting. It depends entirely on customer needs, and these change quickly."
According to Turek, IBM's On Demand service is all about providing solutions tailored to individual requirements.
"Utility should be a base kind of service just like water or electricity. But where those services are rigid, On Demand's intrinsic value needs to be wrapped up in customer need, and that means exceptional flexibility."
HP agrees, though it has coined its own name for the service - the Adaptive Enterprise - while touting the same organic message: IT infrastructure that responds to changing business requirements.
"We have made an announcement on our grid strategy," vice-president and CTO of HP's Software and Adaptive Enterprise unit, Russ Daniels, said. "But that's really a specialised application. We feel utility computing refers to technology applied to business process."
Today, HP has customers accessing its resources for increased computing power similar to the Sun Grid, but like IBM, it also places consulting, traditional hosting, and even several on-site products under its utility umbrella.
Getting the message?
Most customers understand the benefits of flexible hosting. But what of the organic, virtualised, self-managing data centre - assuming it can be achieved? Forrester sees the grand concept of utility computing as a solution for three key problems: wasteful technology purchases, unnecessarily laborious IT processes and rigid IT capabilities that by definition paralyse business processes. Nail those three and you can get a lot more out of its existing resources. The initial investment in provisioning and virtualisation eventually justifies itself by reducing capital expenditures, slowing the growth of IT staff and providing the business with new agility.
Ultimately, a company could run multiple workloads on fewer machines in fewer data centres, and accomplish this through the use of multi-system architectures such as blade-based systems, clusters, or grids. That's only one example, of course. Combining that hardware with a reduced number of platform architectures means faster processing, faster reaction time and less staff training.
Forrester analyst, Frank Gillett, emphasised the business benefit.
"Organic IT isn't just about IT being able to respond to business requirements," he said. "It's about doing that on the fly. And the technology you purchase has to manage that using standardisation and automation to keep costs low."
The utility computing services being offered by Sun, HP, IBM, and others are simply outsourced versions of this same concept.
Sun's MacRunnels doesn't think customers will view her product in traditional outsourcing terms at all in a few years. Sun claims it's talking to the electronic trading exchange Archipelago about allowing customers to sell excess purchased CPU cycles to each other during down cycles.
Grids provide a perfect entry into the utility-computing space because they follow the golden rule of offering more for less: namely, the power of a supercomputer for the price of a few workstations. They offer unheard-of flexibility and they don't require you to rip out existing infrastructure. And these benefits extend to outsourcers as well as those running grids in-house.
Although the standards for hardware grid management are evolving rapidly, software must be designed specifically for use on a grid. This is difficult and time-consuming because it requires a rewrite to comply with a message-passing interface, the foundation of grid computing.
By itself, this is reason enough for most enterprises to have ignored grid computing. During the past year or so, however, new toolkits - such as the one from the Globus Alliance - have arrived to help this process.
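Porting an application to a grid means recasting it in a scatter/compute/gather style, where workers exchange messages instead of sharing memory. Purely as an illustration of that pattern - not real grid code - here is a sketch using Python's standard multiprocessing module standing in for a true message-passing interface such as MPI:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Worker: compute on its own slice and send the result back."""
    return sum(x * x for x in chunk)

def grid_style_sum_of_squares(data, workers=4):
    """Scatter the data, compute in parallel, gather the partial results.

    This is the basic shape message-passing codes take (scatter work,
    compute locally, reduce the results); the hard, time-consuming part
    of a real grid port is finding a decomposition with little enough
    communication to be worth distributing.
    """
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(grid_style_sum_of_squares(range(1000)))
```

The rewrite burden the article describes comes precisely from this restructuring: code written for one shared-memory machine rarely decomposes into independent, message-exchanging pieces without significant redesign.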
Grids are only one example of utility computing's technology challenges. Other areas that need work include storage, WAN issues, security, and compliance. "That's really unavoidable right now," HP's Daniels said, "since the ramifications of computing as a utility are so hugely complex."
Daniels goes on to cite HP's commitment to creating a utility-storage model. "After all," he said, "where's the data? Companies that need additional computing resources typically have large, even vast, quantities of data. That means for utility computing to be viable, you've got to have a working model for utility storage."
HP has released several products and management initiatives aimed at providing a utility model for storage, but it has yet to tie any of them into a coherent utility-computing offering.
Making a plan
So how does IT plan for a migration to the utility model? "Start by understanding application diversity," CTO at Penguin Computing, Don Becker, said. "What runs on what? This is important, as you'll need a management solution that works for each platform."
He also advises moving to a standard hardware platform, the Intel/AMD model being his favourite.
"Finally, look to move to a single operating platform," he said. "Presently, Unix is the system of choice for all things utility, as you simply have more options under Unix than you do Windows."
Within this framework, begin evaluating all new technology purchases with utility goals in mind.
"Don't just look at a single vendor's commitment to utility," King said. "Make sure that every vendor you work with from now on can support as much of your infrastructure as possible."
Each technology player should be evaluated against a utility goal that reflects an organisation's unique combination of business needs. Although software products are still evolving, the hardware platforms are maturing rapidly.
"Sure, there are still important tools missing," Forrester's Gillett said. "But the cost benefits of this architecture are simply too compelling to ignore."