Cloud computing's bottleneck and how to address it

Where once memory was a culprit, bandwidth is emerging as the next major inhibitor

We recently performed a TCO analysis for a client, evaluating whether it would make sense to migrate its application to a cloud provider. Interestingly, our analysis showed that most of the variability in the total cost was caused by assumptions about the amount of network traffic the application would use. This illustrates a key truth about computing: there's always a bottleneck, and solving one shifts the system bottleneck to another location. Virtualization implementers found that the key bottleneck to virtual machine density is memory capacity; now a whole new slew of servers is coming out with much larger memory footprints, removing memory as a system bottleneck. Cloud computing negates that bottleneck by taking machine density out of the equation: sorting it out becomes the responsibility of the cloud provider, freeing the cloud user from worrying about it.
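To make that concrete, here is a minimal sketch of the kind of sensitivity the analysis turned up. The instance counts, prices, and traffic volumes are purely hypothetical assumptions for illustration, not the client's figures; the point is simply that when everything else is fixed, the traffic assumption dominates the variability in the bill.

```python
# Rough TCO sketch: monthly cloud cost as a function of the assumed network
# traffic. All prices and workload figures are illustrative assumptions, not
# the numbers from the client analysis described above.

COMPUTE_COST = 4 * 0.10 * 730    # 4 instances at a hypothetical $0.10/hour
STORAGE_COST = 500 * 0.10        # 500 GB at a hypothetical $0.10 per GB-month
EGRESS_PRICE = 0.12              # hypothetical $ per GB transferred out

def monthly_cost(egress_gb):
    """Total monthly cost for a given assumption about outbound traffic."""
    return COMPUTE_COST + STORAGE_COST + egress_gb * EGRESS_PRICE

# Vary only the traffic assumption; compute and storage stay fixed.
for egress_gb in (500, 2_000, 10_000, 50_000):
    total = monthly_cost(egress_gb)
    bandwidth_share = egress_gb * EGRESS_PRICE / total
    print(f"{egress_gb:>7,} GB/month -> ${total:>9,.2f}  "
          f"({bandwidth_share:.0%} of the bill is bandwidth)")
```

With these made-up numbers, bandwidth moves from roughly 15 percent of the bill at the low-traffic assumption to well over 90 percent at the high one, which is exactly the kind of swing that drove the variability in our analysis.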

For cloud computing, bandwidth to and from the cloud provider is a bottleneck. For some applications, the issue is sheer bandwidth capacity: these applications use or generate very large amounts of data, and the application user may find that there simply isn't enough bandwidth available to shove the data through, given the network capacity the relevant carriers make available. A term often used for this is "skinny straw," inspired by the frustration of trying to suck an extra-thick milkshake through a common beverage straw. The TCO exercise illustrates a different skinny straw, an economic one: for some applications and some users, the available bandwidth may be technically sufficient but economically unviable.
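A quick back-of-the-envelope calculation shows how skinny the straw can be. The dataset size, link speeds, and sustained-utilization figure below are assumptions chosen for illustration.

```python
# "Skinny straw" sketch: time to move a dataset over a given link. The dataset
# size, link speeds, and utilization figure are illustrative assumptions.

def transfer_days(dataset_tb, link_mbps, utilization=0.7):
    """Days to move dataset_tb terabytes over a link_mbps link, assuming the
    link sustains only a fraction of its nominal rate."""
    bits = dataset_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * utilization)  # bits / (bits per second)
    return seconds / 86_400

for mbps in (10, 100, 1_000):
    print(f"10 TB over a {mbps:>5} Mbps link: ~{transfer_days(10, mbps):.1f} days")
```

Under these assumptions, moving 10 TB takes months on a 10 Mbps link and around two weeks even at 100 Mbps, which is why data-heavy applications feel the capacity version of the skinny straw first.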

This problem is only going to get more difficult. The excellent UC Berkeley RAD Lab Report on Cloud Computing noted that the price/performance of network capacity lags that of both compute and storage, indicating that this will be an issue well into the future. On the other hand, because it is a price/performance issue, it could also be attacked from the other direction: drop the price of transit bandwidth by making more of it available. As I noted in my discussion of the recent Structure 09 Conference, during a panel on bandwidth availability the AT&T representative stated that the issue is not network capacity, but business case.

What's interesting about the recent Google Voice/iPhone App Store dustup is how it relates to the future role of the network. Those who believe AT&T was behind the Google Voice rejection describe the motivation as reflecting the carrier's fear of being relegated to a "dumb pipe," reduced to doing nothing more than ferrying other people's bits, rather than providing its own high-margin network applications. If that is truly AT&T's reasoning, it indicates the enormous opportunity the near future holds in being the solution to the skinny straw issue. A cascade, a torrent, a deluge of data is going to want to move around the network, and being the "dumb pipe" that carries it is going to be far more lucrative than trying to compete in figuring out what the next great network-intensive application is going to be. Simply put, cloud computing, in all its *aaS vehicles, is going to be the future of application delivery, with a complementary explosion of network traffic.

For the cloud user, the fact that network traffic is becoming a far larger part of application deployment will affect cloud computing applications and architectures for the foreseeable future. This is going to be a tricky topic because, as noted earlier, bottlenecks shift as they are addressed. With respect to cloud bandwidth, one can expect the bottleneck to be relieved gradually and incrementally, meaning that assumptions about network cost and availability will need rethinking every six months or so; the application architecture that made sense six or 12 months ago might not make sense at another point in time.
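One way to operationalize that periodic rethinking is to keep the bandwidth comparison as a small, re-runnable calculation rather than a one-off spreadsheet. The sketch below uses entirely hypothetical traffic volumes and prices; the point is only that the cheaper option can flip as transit pricing moves.

```python
# Sketch of the recurring check suggested above: re-run the same bandwidth
# cost comparison whenever transit pricing moves. All figures are hypothetical.

egress_gb = 20_000     # assumed monthly outbound traffic, GB
circuit_fee = 1_800    # assumed fixed-fee dedicated circuit, $ per month

for label, price_per_gb in (("today", 0.12), ("a year from now", 0.08)):
    metered = egress_gb * price_per_gb
    cheaper = "metered cloud transfer" if metered < circuit_fee else "fixed-fee circuit"
    print(f"{label}: metered ${metered:,.0f} vs circuit ${circuit_fee:,.0f} -> {cheaper}")
```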

