PCI Express pumps up performance

In the past decade, PCI has served as the dominant I/O architecture for PCs and servers, carrying data generated by microprocessors, network adapters, graphics cards and other subsystems to which it is connected. However, as the speed and capabilities of computing components increase, PCI's bandwidth limitations and the inefficiencies of its parallel architecture have increasingly become bottlenecks to system performance.

PCI is a unidirectional parallel bus architecture in which multiple adapters must contend for available bus bandwidth. Although the performance of the PCI interface has been improved over the years, problems with signal skew (when bits transmitted in parallel arrive at their destination at different times), signal routing and the inability to lower the voltage or increase the frequency strongly indicate that the architecture is running out of steam. Additional attempts to improve its performance would be costly and impractical. In response, a group of vendors, including some of the largest and most successful system developers in the industry, unveiled an I/O architecture dubbed PCI Express (initially called Third Generation I/O, or 3GIO).

PCI Express is a point-to-point switching architecture that creates high-speed, bidirectional links between a CPU and system I/O (the switch is connected to the CPU by a host bridge). Each of these links can encompass one or more "lanes," each comprising four wires - two for transmitting data and two for receiving data. The design of these lanes enables the use of lower voltages (resulting in lower power usage), reduces electromagnetic emissions, eliminates signal skew, lowers costs through simpler design and generally improves performance.

In its initial implementation, PCI Express can yield transfer speeds of 2.5G bit/sec in each direction, on each lane. By contrast, the version of the PCI architecture that is most common today, PCI-X 1.0, offers about 1G byte/sec of throughput. PCI Express cards are available in four- or eight-lane configurations (called x4 and x8). An x4 PCI Express card can provide as much as 20G bit/sec of aggregate throughput, while an x8 PCI Express card can offer up to 40G bit/sec.
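The per-card figures follow directly from the per-lane rate. Below is a minimal Python sketch of that arithmetic; the function and constant names are illustrative, and the results are raw signaling rates for both directions combined, not usable application throughput.

# Minimal sketch of the bandwidth arithmetic above (illustrative names;
# figures are raw signaling rates, not usable application throughput).
LANE_RATE_GBPS = 2.5   # first-generation PCI Express, per lane, per direction
DIRECTIONS = 2         # each link carries traffic both ways at once

def aggregate_gbps(lanes, lane_rate_gbps=LANE_RATE_GBPS):
    """Total raw link bandwidth: lanes x per-lane rate x both directions."""
    return lanes * lane_rate_gbps * DIRECTIONS

for width in (4, 8):
    print("x%d: %.0f Gbit/sec aggregate" % (width, aggregate_gbps(width)))
# Prints:
#   x4: 20 Gbit/sec aggregate
#   x8: 40 Gbit/sec aggregate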

Earlier attempts to create a new PCI architecture failed in part because they required so many changes to the system and application software. Drivers, utilities and management applications all would have to be rewritten. PCI Express developers removed the dependency on new operating system support, letting PCI-compatible drivers and applications run unchanged on PCI Express hardware.
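One way to see that compatibility in practice: on a Linux host, PCI Express devices are exposed through the same PCI interfaces as conventional PCI devices, so enumeration code written for PCI needs no changes. The sketch below assumes the standard sysfs layout under /sys/bus/pci and is illustrative only.

# Hedged sketch, assuming a Linux host: PCI Express devices appear in the
# same /sys/bus/pci hierarchy as conventional PCI devices, so enumeration
# code written for PCI handles PCI Express hardware unchanged.
from pathlib import Path

PCI_SYSFS = Path("/sys/bus/pci/devices")

def list_pci_devices():
    """Yield (bus address, vendor ID, device ID) for each PCI/PCIe function."""
    for dev in sorted(PCI_SYSFS.iterdir()):
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        yield dev.name, vendor, device

if __name__ == "__main__":
    for address, vendor, device in list_pci_devices():
        print(address, vendor, device)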

A bus for the future

Developers are working on increasing the scalability of PCI Express. While current server and desktop systems support PCI Express adapters and graphics cards with up to eight lanes (x8), the architecture will support as many as 32 lanes (x32) in the future.

The first PCI Express Fibre Channel host bus adapters were designed to support four lanes instead of eight, in part because server developers had designed their systems with four-lane slots. As even more bandwidth is required, moving to an eight-lane design potentially could double the performance, provided there were no other bottlenecks in the system.

This scalability, along with the expected doubling of the speed of each lane to 5G bit/sec, should keep PCI Express a viable solution for designers for the foreseeable future.
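For reference, the same arithmetic as before can be extended to wider links and the anticipated doubled lane rate; again these are raw aggregate figures for both directions combined, not usable throughput.

# Same arithmetic as above, extended to wider links and a doubled lane rate.
DIRECTIONS = 2

def aggregate_gbps(lanes, lane_rate_gbps):
    return lanes * lane_rate_gbps * DIRECTIONS

for width in (1, 4, 8, 16, 32):
    today = aggregate_gbps(width, 2.5)   # current 2.5G bit/sec lanes
    future = aggregate_gbps(width, 5.0)  # anticipated 5G bit/sec lanes
    print("x%-2d  %5.0f Gbit/sec now, %5.0f Gbit/sec at 5G bit/sec per lane"
          % (width, today, future))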

PCI Express is a significant improvement over PCI and is well on its way to becoming the new standard for PCs, servers and more. Not only can it lower costs and improve reliability, but it also can significantly improve performance. Applications such as music and video streaming, video on demand, VoIP and data storage will benefit from these improvements.

Watson is the product manager responsible for PCI Express and blade products at Emulex. He can be reached at lovest.watson@emulex.com.

