2011: When cloud computing shook the data center
- 27 December, 2011 22:09
If I had to sum up in one word the most exciting thing that happened to cloud computing in 2011, I'd have to say it's OpenStack. This open source project, launched by Rackspace and NASA in late 2010, is assembling a private cloud "operating system" for the data center that promises vast increases in operational efficiency. The momentum behind it is phenomenal; at last count, 144 companies back the project, including Cisco, Citrix, Dell, HP, and Intel.
But at the same time, the public cloud is surging -- and not just Amazon and Salesforce, though those two remain the largest public cloud service providers. The telcos (notably Verizon) are gearing up to deliver IaaS (infrastructure as a service) at a larger scale than ever before. Microsoft, HP, and others are also building out huge public cloud capacities.
On the one hand, with OpenStack, we have a vibrant, fast-growing open source project for creating private clouds, flanked by VMware, which offers a proprietary portfolio of private cloud software. On the other hand, we have an increasing number of businesses seriously asking, "Do I really want to run my own data center?" For those that don't, the public cloud is getting more attractive all the time.
Going public

During a recent visit to InfoWorld, Kerry Bailey, president of Terremark (now chief marketing officer for Verizon Enterprise Solutions), was exceptionally bullish in his predictions for public cloud growth. Acquired earlier this year by Verizon, Terremark operates a public cloud IaaS play that's 100-percent VMware -- the enterprise virtualization vendor of choice.
Bailey says Terremark has seen 178 percent growth in its cloud business from 2010 to 2011, with current revenues in the hundreds of millions. He also says that the No. 1 objection to the public cloud, security, has been replaced by performance -- which Terremark has addressed with proximity. According to Bailey, Terremark now has a physical presence "in all the NFL cities" in the United States. And thanks to Verizon, managed routing services enable "direct access to the backbones of the world's leading carriers" to ensure high quality of service.
Recent high levels of customer demand led Bailey to predict that IDC's estimate of cloud growth -- to $148 billion worldwide by 2014 -- may be missing the mark by several multiples. "Try $600 billion or even $750 billion," says Bailey.
Such aggressive numbers may be self-serving, but the ranks of public cloud boosters are growing. I recently spoke to Joe Coyle, CTO of Capgemini, who believes "the telcos are going to be huge" players in public cloud services. Moreover, he says, in some engagements he is "hard-pressed to come up with a reason to be in your own data center anymore."
In economic times like these, up-front cost is clearly a factor. Conventional wisdom says that sunk cost in infrastructure will prevent enterprises from migrating to the cloud. Who would simply abandon all that stuff? But that formulation changes when rack upon rack of servers reaches the end of its useful life. You can gear up for another major capital investment in hardware -- or turn to a public cloud service provider instead.
The same dynamic applies to SaaS vs. conventional on-premise software -- paying as you go can be a lot more palatable than paying for servers and licensing fees up front, especially when the service is dirt cheap. To take one example, Google Enterprise vice president Amit Singh recently told me that 5,000 businesses per day are signing up for Google Apps, as opposed to 3,000 per day one year ago.
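The capex-vs-opex trade-off behind that choice comes down to simple break-even arithmetic, sketched below. All dollar figures are hypothetical, chosen only to illustrate the reasoning; real comparisons would fold in depreciation, staffing, and bandwidth costs.

```python
# Toy break-even sketch: buying hardware up front (capex) vs. renting
# capacity from a cloud provider month by month (opex).
# All numbers are hypothetical illustrations, not real pricing.

def months_to_break_even(capex, monthly_onprem_opex, monthly_cloud_cost):
    """Return the first month at which cumulative on-premise cost
    (up-front hardware plus ongoing overhead) drops below cumulative
    cloud cost, or None if the cloud stays cheaper for 10 years."""
    for month in range(1, 121):
        onprem_total = capex + monthly_onprem_opex * month
        cloud_total = monthly_cloud_cost * month
        if onprem_total < cloud_total:
            return month
    return None

# Example: a $120,000 rack refresh with $1,500/month in power, space,
# and maintenance overhead, versus a $4,500/month IaaS bill.
breakeven = months_to_break_even(capex=120_000,
                                 monthly_onprem_opex=1_500,
                                 monthly_cloud_cost=4_500)
# Ownership pays off only after roughly three and a half years --
# which is why the calculus shifts as hardware nears end of life.
```

If the on-premise overhead rises toward the cloud's monthly price, the break-even point recedes to infinity and renting wins outright, which is the scenario Coyle describes.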
Marshaling the private cloud

It's worth noting that even if Bailey's wildest predictions turn out to be correct, spending on the public cloud would still amount to little more than 20 percent of global IT spending by 2014. The rest will be spent on customers' own IT infrastructure and personnel. In large IT operations, the private cloud -- born of technologies and techniques pioneered by public cloud providers -- will provide the path to new levels of efficiency and agility.
As InfoWorld's Matt Prigge observed in his post "How I learned to stop worrying and love the private cloud," pervasive server virtualization has created a crying need for software to manage pooled resources. So-called private cloud software addresses that need with many moving parts, including virtualization management, metering and chargeback systems, automated configuration, identity management, self-service provisioning, application management, and more.
Though far from complete, the OpenStack private cloud solution is compelling in part because it follows a Linux-like open source model. Today, under an Apache license, the OpenStack "kernel" has three components: Compute (for managing large networks of virtual machines), Object Storage (for massive storage clusters), and Image Service (for managing virtual disk images). Around that kernel -- as with Linux distros -- vendors add value. The leading commercialized version of OpenStack is Project Olympus from Citrix; startup vendors Internap, Nebula, and Piston Cloud Computing also use the OpenStack core.
Between its debut in October 2010 and today, OpenStack has already undergone four revisions. The fifth, code-named Essex and scheduled for release in spring 2012, will include two new components: Identity, for authentication and authorization, and Dashboard, a UI for managing OpenStack services.
But OpenStack is hardly the only game in town. Its best-known competitor is Eucalyptus, a private cloud implementation of Amazon Web Services that enables you to move workloads back and forth between Amazon EC2 and Eucalyptus (which also comes in an open source version). Then there's Puppet, a wildly popular configuration management framework designed to automate almost any repeatable task in the data center. Puppet can create fresh installs and monitor existing nodes; push out system images, as well as update and reconfigure them; and restart your services -- all unattended.
If you're willing to pay the licensing fees, you can even build an all-VMware private cloud. Virtualization is the underpinning of the private cloud -- and VMware still offers the most advanced virtualization management tools. In October 2011, VMware announced three new suites to "simplify and automate IT management," including vCenter Operations Management Suite (an update of vCenter Operations for monitoring infrastructure and managing configuration), vFabric Application Management Suite (mainly devops tools), and IT Business Management Suite (to report on operating expenses, service levels, and so on).
The cloud panacea

Yet for some reason, all these efforts to automate everything simply remind me how complex the data center really is. In a recent presentation by VMware vice president of products Ramin Sayar, I was struck by how ambitious VMware's plans seemed -- how many different types of managers and administrators all that software needed to serve -- and how much cost and effort might be incurred in wrapping it all the way around the data center. The road to simplicity seems paved with even more complexity.
The irony is if you choose to relocate your data center to the public cloud, that complexity will not magically disappear. IaaS is still infrastructure. You won't need to pay for hardware up front, and you won't need to employ people to stand up boxes or reroute cables, but your own IT people will still need to watch the meters and turn the dials remotely. Very likely, they'll need cloud-specific skills on top of the usual skills required to run a data center.
Ultimately, IT's mission is to deliver applications -- either bought or built for the business. In the long run, the cloud that really simplifies IT will largely be composed of SaaS and PaaS (platform as a service). Slowly, haltingly, Microsoft is moving in that direction with Office 365 and Azure. Salesforce lives there, and its newly acquired PaaS play Heroku now goes beyond Ruby to support Node.js, Java, and Python. And of course, there's Google Apps and Google App Engine.
Those are just a few big names amid hundreds of SaaS and PaaS players. But it's still too early for any but the smallest startup to consider going without local infrastructure at all. Instead, we're entering a long hybrid cloud period, with a chunk of public cloud infrastructure over here, some SaaS apps over there, and a local data center that -- through Herculean efforts to overcome complexity -- will be somewhat easier to manage thanks to private cloud software.
All that will need to be integrated together. Gaurav Dhillon, CEO of cloud integration startup SnapLogic, wants to supply that connective tissue between cloud services and on-premise applications -- as do several other public cloud integration services, including Boomi, acquired by Dell a little over a year ago.
Dhillon recently told me "2012 is the year the enterprise cloud...the first time enterprises use the public cloud in a big way." Maybe so, although it will still be a small slice of the enterprise IT spend. I have little doubt the cloud will triumph in the end -- the economies of scale are just too compelling. But we're at the beginning of a very long ascent skyward, with many convoluted twists and turns along the way.
This story, "2011: When cloud computing shook the data center," was originally published at InfoWorld.com.