Big Blue plotting grand SAN plan
IBM plans to marry its storage-area network (SAN) lineup to the Future I/O architecture. Future I/O is the new-generation switched fabric architecture being added to Intel-based PC servers to exploit high-speed network technologies, such as Gigabit Ethernet and Fibre Channel.

It will act as an alternative to the PCI bus architecture.

Upcoming IBM SAN announcements include:

New Fibre Channel hubs and a 16-port SAN switch.

Tivoli SAN management tools.

A WAN hardware gateway.

Big Blue made its first SAN announcement in February, and the company already offers a seven-port, 100MB/sec Fibre Channel hub, along with a few other storage devices.

Sources say that over the next six months IBM will roll out more hubs that users can manage, and collect network statistics from, using Tivoli software.

IBM, working with partners, is also planning to offer a Fibre Channel switch that will provide about 16 ports and be Tivoli-ready. Using the switch, any server or Fibre Channel device will be able to log on to, and have dedicated links to, other storage devices.

Sources also say IBM will offer a WAN bridging device for high-speed data recovery and remote disk mirroring. The device will be able to buffer data coming from a Fibre Channel SAN and run it over slower WAN lines. It will also do conversion from Fibre Channel to frame relay or ATM.
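The gateway's buffering role can be sketched as a classic bounded-buffer problem: a fast producer (the Fibre Channel side) fills a queue that a slower consumer (the WAN side) drains at its own pace. The sketch below is purely illustrative; the names, rates, and frame counts are invented and do not reflect IBM's implementation.

```python
import queue
import threading
import time

buf = queue.Queue(maxsize=64)   # bounded buffer absorbs the speed mismatch

def fibre_channel_side(n_frames):
    # Fast side: put() blocks when the buffer is full, giving back-pressure.
    for i in range(n_frames):
        buf.put(f"frame-{i}")
    buf.put(None)               # sentinel: no more frames

def wan_side(sent):
    # Slow side: drains the buffer at the WAN link's lower rate.
    while True:
        frame = buf.get()
        if frame is None:
            break
        time.sleep(0.001)       # simulate the slower WAN line
        sent.append(frame)

sent = []
t1 = threading.Thread(target=fibre_channel_side, args=(200,))
t2 = threading.Thread(target=wan_side, args=(sent,))
t1.start(); t2.start(); t1.join(); t2.join()
print(len(sent))                # all 200 frames delivered despite the rate gap
```

The bounded queue is the key design point: it lets the fast side run ahead in bursts without data loss, while back-pressure keeps memory use finite.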

New Tivoli management capabilities are also on the way, IBM says. There will be tools that monitor SANs for bottlenecks or failure. One software component will ensure that data running between servers or storage devices in a SAN is not lost.

The software can also detect if a server attached to a shared storage device is malfunctioning, and then lock that server out to keep it from affecting other servers in the SAN.
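The fencing behaviour described above amounts to heartbeat monitoring: servers attached to the shared storage answer periodic checks, and one that stops answering is locked out before it can disturb the rest of the SAN. Here is a minimal hypothetical sketch; the class name and the three-miss threshold are assumptions for illustration, not details of the Tivoli software.

```python
class SharedStorageMonitor:
    """Locks out servers that miss too many consecutive heartbeats."""

    def __init__(self, miss_threshold=3):
        self.miss_threshold = miss_threshold
        self.missed = {}        # server name -> consecutive missed heartbeats
        self.locked_out = set()

    def register(self, server):
        self.missed[server] = 0

    def tick(self, responding_servers):
        # Called once per interval with the set of servers that answered.
        for server in list(self.missed):
            if server in self.locked_out:
                continue
            if server in responding_servers:
                self.missed[server] = 0
            else:
                self.missed[server] += 1
                if self.missed[server] >= self.miss_threshold:
                    self.locked_out.add(server)   # fence the faulty server

mon = SharedStorageMonitor()
for s in ("serverA", "serverB"):
    mon.register(s)
for _ in range(3):
    mon.tick({"serverA"})       # serverB has stopped answering
print(sorted(mon.locked_out))   # ['serverB']
```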

IBM is also considering Future I/O for storage. Currently, PC servers, such as IBM's Netfinity and Compaq's ProLiant series, rely on a shared-bus PCI architecture that tops out at 532Mbps and easily becomes congested. In contrast, Future I/O uses an internal switching fabric to route around bottlenecks, so there is no single point of failure. And in its first iteration, expected in a year and a half, Future I/O will run traffic at 2.5Gbps.
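The difference between the two figures is starker than the headline numbers suggest, because a shared bus divides its bandwidth among all attached devices while a switched fabric gives each link its own. A back-of-the-envelope comparison, using the figures quoted above and an invented device count:

```python
# Bandwidth figures quoted in the article; the device count is hypothetical.
pci_shared_bps = 532e6          # shared PCI bus ceiling
fio_link_bps = 2.5e9            # first-generation Future I/O link rate

devices = 8                     # assumed number of attached devices
per_device_shared = pci_shared_bps / devices   # all devices split the bus
per_device_switched = fio_link_bps             # each link is dedicated

print(per_device_shared / 1e6)    # 66.5  -> Mbps each on the shared bus
print(per_device_switched / 1e9)  # 2.5   -> Gbps each across the fabric
```

With eight devices contending, each gets roughly 66Mbps of shared PCI bandwidth, versus a dedicated 2.5Gbps link per device on the switched fabric.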
