InfiniBand: interconnect infrastructure for scalable data centers

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

High-performance computing, big data, Web 2.0 and search applications depend on managing, understanding and responding to massive amounts of user-generated data in real time. With more users feeding more applications and platforms, the data is no longer growing arithmetically -- it is growing exponentially. To keep up, data centers need to grow as well, both in data capacity and in the speed at which data can be accessed and analyzed.

Scalable data centers today consist of parallel infrastructures, both in the hardware configurations (clusters of compute and storage) and in the software configuration (for example Hadoop), and require the most scalable, energy-efficient, high-performing interconnect infrastructure: InfiniBand.


Ethernet is used widely in data centers, but it carries backward compatibility with decades' worth of legacy equipment, and its architecture is layered -- top of rack, aggregation and core. While this is a suitable match for a dedicated data center, it is more of a challenge for a fast-growing, scalable compute infrastructure.

InfiniBand was first used in the high-performance computing arena due to its performance and agility. But it isn't just InfiniBand's extremely low latency, high throughput and efficient transport (which requires little CPU power) that has made it the obvious choice for scalable data centers. Rather, it's InfiniBand's ability to accommodate very large flat networks built from the same switch components, its capability to ensure lossless and reliable delivery of data, and its congestion management and support for shallow buffers.

The basic building blocks of the InfiniBand network are the switches (ranging from 36 to 648 ports in a single enclosure) and gateways from InfiniBand to Ethernet (10G or 40G). The InfiniBand switch fabric runs at 56Gbps, allowing flexible configurations and oversubscription in cases where the throughput to the server can be lower. The InfiniBand fabric, and the applications that run on top of InfiniBand adapters, are managed the same way we manage Ethernet fabrics and applications running on Ethernet NICs.
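To give a feel for how these building blocks combine, here is a rough sketch of the port arithmetic for a two-tier (leaf/spine) fat tree built from 36-port switches. The fat-tree layout and the oversubscription ratios are illustrative assumptions, not figures from the article; only the 36- and 648-port switch sizes come from the text above.

```python
# Sketch: host counts for a two-tier leaf/spine fat tree built from
# 36-port switches. Illustrative only; real fabrics vary.

SWITCH_PORTS = 36  # smallest enclosure size mentioned above


def fat_tree_hosts(switch_ports: int, oversubscription: int = 1) -> int:
    """Hosts supported by a two-tier fat tree.

    With 1:1 (non-blocking), each leaf splits its ports evenly between
    hosts (down) and spines (up). With N:1 oversubscription, N times
    more leaf ports face hosts than face spines.
    """
    up = switch_ports // (oversubscription + 1)   # uplinks per leaf
    down = switch_ports - up                      # host ports per leaf
    leaves = switch_ports  # each spine port carries one leaf uplink
    return leaves * down


print(fat_tree_hosts(SWITCH_PORTS))     # 1:1 non-blocking -> 648 hosts
print(fat_tree_hosts(SWITCH_PORTS, 2))  # 2:1 -> more hosts, less bisection
```

Note that the non-blocking case lands on 648 hosts, the same figure as the largest single-enclosure switch quoted above; oversubscribing trades bisection bandwidth for more attached servers.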

InfiniBand is a lossless fabric that does not suffer from the spanning tree problems of Ethernet. Scaling is made easy through the ability to add simple switch elements and grow the network to 40,000 server and storage endpoints in a single subnet, and to 2^128 (~3.4e+38) endpoints in a full fabric. InfiniBand adapters consume extremely little power -- less than 0.1 watt per gigabit -- and InfiniBand switches less than 0.03 watts per gigabit.
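The per-gigabit figures above translate into per-port power budgets with simple arithmetic; the following back-of-the-envelope calculation just multiplies the quoted upper bounds by the 56Gbps link rate mentioned earlier.

```python
# Back-of-the-envelope power per 56Gbps port, from the per-gigabit
# upper bounds quoted in the article.

LINK_GBPS = 56
ADAPTER_W_PER_GBIT = 0.1    # "less than 0.1 watt per gigabit"
SWITCH_W_PER_GBIT = 0.03    # "less than 0.03 watts per gigabit"

adapter_watts = LINK_GBPS * ADAPTER_W_PER_GBIT   # adapter port bound
switch_watts = LINK_GBPS * SWITCH_W_PER_GBIT     # switch port bound

print(f"adapter: < {adapter_watts:.2f} W per port")
print(f"switch:  < {switch_watts:.2f} W per port")
```

So even at full 56Gbps rate, each adapter port stays under roughly 5.6W and each switch port under roughly 1.7W by these bounds.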

As InfiniBand competes with Ethernet, InfiniBand pricing has remained competitive, and its higher throughput yields a lower cost per endpoint.

10X performance improvement, 50% capex reduction

The combination of sub-1-microsecond latency, 56Gbps throughput, Remote Direct Memory Access (RDMA), transport offload, lossless and congestion-free operation, and more, enables InfiniBand users to dramatically increase their application performance and reduce their capital and operational expenses.
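A simple model shows why both of the headline numbers matter: end-to-end message time is roughly wire latency plus size divided by bandwidth, so latency dominates for small messages and throughput for large ones. The rounding to exactly 1 microsecond and the neglect of encoding and software-stack overhead are simplifying assumptions for illustration.

```python
# Illustrative: message time ~= latency + size / bandwidth.
# Round figures from the article; real numbers depend on message
# size, link encoding overhead and the software stack.

LATENCY_S = 1e-6        # ~sub-microsecond latency, rounded up to 1 us
BANDWIDTH_BPS = 56e9    # 56 Gbps signaling rate


def transfer_time(size_bytes: float) -> float:
    """Seconds to deliver one message of the given size."""
    return LATENCY_S + (size_bytes * 8) / BANDWIDTH_BPS


for size in (64, 4096, 1 << 20):   # 64 B, 4 KiB, 1 MiB
    print(f"{size:>8} bytes: {transfer_time(size) * 1e6:.2f} us")
```

For a 64-byte message the serialization time is only a few nanoseconds, so latency is essentially the whole cost; for a 1MiB message the size/bandwidth term dominates by two orders of magnitude.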

Oracle, for example, jumped on the InfiniBand train a few years ago and has built database, cloud, in-memory and storage solutions based on InfiniBand. That decision enabled it to deliver performance improvements of 10X and more for its users.

Microsoft Bing Maps' decision to use InfiniBand in its data center delivered a performance gain while saving the company 50% on capital expense versus 10G Ethernet. And EMC/Greenplum built a large-scale Hadoop system based on InfiniBand in order to maximize the potential of the new InfiniBand-based Hadoop accelerations.

These are only a few public examples. In high-performance computing, InfiniBand has become the de facto interconnect of choice. In the storage arena, more and more solutions are using InfiniBand for back-end connectivity, and some for front-end connectivity as well. IBM XIV, EMC, DataDirect and Xyratex are only a few examples of vendors with such solutions.

What lies ahead?

InfiniBand is one of the fastest-evolving interconnect technologies in the market. While major Ethernet generations have arrived roughly every 10 years, a new InfiniBand generation is introduced every two to three years.

10Gbps InfiniBand was released in 2002, 20Gbps in 2005, 40Gbps in 2008 and 56Gbps in 2011. In the next two to three years we expect to see 100Gbps InfiniBand emerge, with full solutions ready to use in massive installations. Each generation also adds features that allow applications to run faster and compute infrastructures to become more efficient in performance, power consumption and resiliency.

In the next few years, the data center (virtualized, cloud-based etc.) will process and move data in quantities never seen before. In 10 years, these centers will serve hundreds of millions of content-creating elements -- environmental sensors, satellite cameras, guided cars, and every living person. The interconnect technology will be critical to the success of these data centers, and InfiniBand has a great chance to be the interconnect of choice.

Eyal Gutkind, Eli Karpilovski, Motti Beck, Todd Wilde, Tong Liu, Pak Lui and Brian Sparks of Mellanox Technologies also contributed to this article.

