I'd rather switch than route

They call them routing switches, Layer 3 switches, or even Layer 4 switches if the marketing department gets in first. Bigger number, so it must be better, right? Customers just love this idea. It isn't that they don't like Cisco anymore, it's just that these gadgets promise to be real-life Plug and Play devices. No need to hire that wizard to set up the routing tables. Plug this switch in and it knows what to route where. Magic. When something sounds too good to be true, it usually costs too much. ARN takes a look at the rocky road from switch to router and helps negotiate the speed-humps along the way.

By Susan Breidenbach

Where we've come from

New technologies tend to issue forth from agile start-ups unencumbered by installed bases and investments in existing product lines. LAN switching is a case in point. The first commercial Ethernet switch was prototyped in the Silicon Valley garage of entrepreneur Vinod Bhardwaj, now president and CEO of ControlNet, a high-speed US networking start-up.

Bhardwaj went out on his own in 1987 with an idea for boosting the capacity of what was then pre-10Base-T Ethernet. LANs were proliferating everywhere. Ethernet's bus architecture was holding things back.

Bhardwaj was working on a three-port device that would replace Ethernet's individual t-connectors when he had a flash: it was not the speed but rather the shared nature of Ethernet that was the problem. The three-port device simplified wiring but wouldn't really scale.

The answer was a product that provided dedicated connections to each station and could eventually include an uplink to higher speed backbones.

When Bhardwaj pitched his invention to network companies, he was regarded, like many pioneers before him, as being afflicted with moonstruck madness. "They said, 'We've moved on to routing, and you're sending us back to bridging'," he recalled.

Rejected but resolute, Bhardwaj left the established players to their 10Base-T committee battles and co-founded his own company called Kalpana.

The first order of business was to get rid of the "b-word", so the device - really the aggregation of a bunch of bridges - was designated an Ethernet switch. The product, dubbed the EtherSwitch, was encased in a traditional four-cornered box, but Kalpana marketers represented it on network diagrams as circular just to make it look different.

"FDDI and ATM were coming out at the same time that we were releasing this 'fancy bridge', so we promoted it as just tactical," said Larry Blair, vice president of marketing at Redback Networks. Tactical indeed. It turned out to be the beginning of a paradigm shift that would enable frame-based networking to flourish into the 21st century. At the time, however, finding believers wasn't easy.

A smart move

One well-known network industry executive was offered worldwide marketing rights to the EtherSwitch for $US250,000 - and turned the opportunity down three times.

The Kalpana team was left to its own devices and in 1990 rolled out the first EtherSwitch. It had seven 10Mbps ports and sold for $US11,500 - about $US1650 per port. However, the price was less of an obstacle to prospective customers than the configuration changes it meant for their networks.

"They were concerned about reliability and introducing a single point of failure," Bhardwaj remembered. "But once we got in the door, we could demonstrate a very visible improvement to network performance."

In fact, early product reviews characterised the EtherSwitch's speed as "stunning", and sales started to snowball. By 1992, vendors that had scoffed at the concept of Ethernet switching were lining up for OEM deals. Their customers were demanding "Kalpana-like" technology.

Kalpana passed into history in 1994 when it was swallowed up by router giant Cisco. Ever the entrepreneur, Bhardwaj had left the company three months previously so he wouldn't have to sign a non-compete agreement.

What are his thoughts on the revolution his EtherSwitch has wrought?

"Switching has progressed more than I originally thought, but it has also gotten a lot more complicated," he said. "Switches were throughput devices that were supposed to replace hubs, not routers. You keep them simple, simple, simple. Throw bandwidth at the problem, not complexity."

Where we're going

How much faster and cheaper can data switching get? In eight years, switch throughput has gone from 150,000 packets/sec at Layer 2 to more than 50 million packets/sec at Layer 3, and there is no sign that the electronics driving these advances are running out of steam. In fact, recent announcements of terabit-speed switches indicate the rate of improvement may be accelerating.

Every time it starts to look as if the industry might have to go to optical technology in order to increase capacity, electronics makes another leap. The new terabit switches don't even incorporate the latest silicon technology. They use 0.25-micron silicon - 1/600th the width of a human hair - and 0.18-micron technology is in the works. Such linear decreases in size translate into a geometric progression in the number of circuits that can be squeezed onto a chip. And the closer together the circuits, the faster they can operate.
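
To make that arithmetic concrete, here is a small illustrative sketch in Python. The 0.25- and 0.18-micron figures are the ones cited above; the assumption that circuit count per die scales with the inverse square of the feature size is a simplification, not a claim about any particular vendor's process.

    # Illustrative arithmetic only: model circuits per chip as scaling with
    # the inverse square of the process feature size.
    def relative_density(old_size_um, new_size_um):
        """How many times more circuits fit on the same die after a shrink."""
        return (old_size_um / new_size_um) ** 2

    if __name__ == "__main__":
        gain = relative_density(0.25, 0.18)
        print("0.25-micron -> 0.18-micron: ~%.1fx more circuits per chip" % gain)

On that rough model, the move from 0.25-micron to 0.18-micron silicon alone nearly doubles the circuits that fit on the same die, before any architectural cleverness is added.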

"I tend to think there isn't a limit," said Diane Myers, a senior analyst who follows the semiconductor industry for In-Stat.

Getting there

Six or seven years ago, Application Specific Integrated Circuits (ASICs) ran at 20MHz and had 20,000 gates, or groups of transistors that implement some logic. Today, 0.25-micron technology has pushed those numbers to 100MHz and 500,000 gates, and 0.15-micron silicon should bump them to 400MHz and two million gates.

This is assuming chip designers use standard libraries that have been developed at the gate level. While using such pre-packaged logic is a lot cheaper than starting from scratch, it wastes a lot of space.

The capacity of the basic materials is just one aspect of switch performance. Throughput can also be boosted dramatically through architectural innovations.

For example, typical shared-memory switches have trouble scaling beyond 30Gbps because the ports have to access the memory through a bus. Packet Engines has eliminated the bus and come up with what it calls parallel access shared memory.
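
As a back-of-the-envelope sketch only - not a description of Packet Engines' actual design - the following Python fragment shows why the shared bus is the choke point. The 30Gbps bus figure comes from the paragraph above; the per-port speeds and port counts are hypothetical.

    # Rough model of a shared-memory switch: a single shared bus clips the
    # aggregate throughput at the bus speed, while a parallel path from every
    # port into memory has no such single choke point (until the memory
    # itself runs out of bandwidth).
    def aggregate_throughput_gbps(ports, port_speed_gbps, bus_gbps=float("inf")):
        """Offered load is ports * port speed; a shared bus caps it at bus_gbps."""
        return min(ports * port_speed_gbps, bus_gbps)

    if __name__ == "__main__":
        for ports in (8, 16, 32, 64):
            shared = aggregate_throughput_gbps(ports, 1.0, bus_gbps=30.0)
            parallel = aggregate_throughput_gbps(ports, 1.0)
            print("%2d ports: shared bus ~%.0fGbps, parallel ~%.0fGbps"
                  % (ports, shared, parallel))

In the shared-bus case the numbers flatten out at 30Gbps no matter how many ports are added, which is exactly the scaling wall described above.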

If and when the electronics wizards run out of tricks, optical technology presents some possibilities. A 1988 patent describes a 30Gbps photonic-array backplane. But that technology emanated from the defence industry, in which cheque books tend to be a bit larger than the ones available to Gigabit Ethernet start-ups.

This feature was edited by Ian Yates

