Is Google too big to fail?

Google has already achieved the enviable marketing distinction of turning its name into a verb. But its enormous popularity and global reach place an unintended burden on the search giant: When it goes down, the entire Web is shaken.

That’s exactly what happened on May 14, when Google suffered a major failure. A routing error sent traffic to servers in Asia, creating what Google called “a traffic jam”. According to the company, 14 per cent of its users experienced slowdowns or outages; many accounts put the number of those inconvenienced considerably higher. And we can’t even guess at how many people were seriously put out by subsequent outages, including the Gmail failure last month.

What recently got my attention was a study of Internet usage by Arbor Networks, which found that just 100 ASNs (autonomous system numbers) out of about 35,000 account for some 60 per cent of traffic on the public Internet. Looked at another way, of the roughly 40,000 routed sites on the Internet, just 30 large companies now generate and consume a disproportionate 30 per cent of all Internet traffic.

Not surprisingly, the biggest kahuna of all the big kahunas is Google, which accounts for about 6 per cent of all Internet traffic globally. The other big guys include Level 3, Limelight, Akamai and Microsoft, in that order.

Yes, the Internet is stronger – in a structural sense – than ever. But the concentration of traffic in so few hands raises troubling questions about the ability of the Internet to function when a major originator of traffic goes down or becomes infected. Simply put, Google may be too big to fail, and as we learned during the financial meltdown, that ain’t good.

I tend not to be impressed by studies conducted by vendors, but this one strikes me as quite credible. Arbor, in collaboration with the University of Michigan and Merit Network, looked at two years of Internet traffic across 110 large and geographically diverse cable operators, international transit backbones, regional networks and content providers. The results were based on an analysis of 2,949 peering routers across nine Tier-1, 48 Tier-2, and 33 consumer and content providers in the Americas, Asia and Europe.

The implications of the results are, well, scary. In part that’s because the structure of the Internet has changed significantly in the past few years, said Danny McPherson, Arbor’s chief security officer and a co-author of the study. Network traffic used to travel up and down the food chain of transit providers – an inefficient arrangement, but one that did not create single points of failure.

These days, networks are far more likely to be directly interconnected. On the one hand, that makes them more efficient and generally more robust. On the other, because so many are interconnected – McPherson called it a “flattening of the Internet” – when a big one goes down, lots and lots of sites are affected. The results can be far-reaching.

What’s true of Google is equally true of the other companies McPherson called “hyper-giants”. As recently as five years ago, this wasn’t the case: Internet traffic was distributed proportionally across tens of thousands of enterprise-managed websites and servers around the world. Now most of that content has migrated to a small number of very large hosting, cloud and content providers.

It’s not a huge stretch to conclude that a handful of providers now have enormous influence over the Internet economy, as well as a good deal of social and political power should they choose to exercise it. I’m not at all sure I like that.

