Open slather in storage

Storage has finally drawn a line between the genuinely complex and the unnecessarily convoluted. This has happened largely because the overwhelming desire for a common standard has forced hardware vendors to stop clutching proprietary blueprints to their chests like three-year-olds with a safety blanket. This degree of "openness" has also dispelled some misapprehensions about interoperability and paved the way for service providers and integrators to take a more active role in the storage game.

"Interoperability is available today via independent software vendors," says Bruce Jenkins, storage expert for BMC Software. From a hardware perspective, however, interoperability is still some way off. The main glitch lies with hardware vendors, some of which have publicly stated that they will not share API-level information with their competitors. "To get around this, some vendors are creating partnerships, but these amalgamations do not remove the problem, they simply create larger factions," explains Jenkins. "Each vendor is still attempting to protect its patch or gain the upper hand in an account. Their interoperability developments will look to support their position."

For service providers, this tension has made operating in the storage space difficult. They are forced to align themselves with hardware manufacturers for technical expertise, which means sharing a larger cut of the revenue and relaxing control over customer accounts.

This alignment also affects the channel's ability to offer independent and objective advice. "Channel partners can play a major part in resolving interoperability issues as they often have relationships with multiple vendors," says Ian Selway of Hewlett-Packard. In this capacity, the channel is asked to protect the customer's investment by "backing a winner", a skill which demands in-depth knowledge of the market, players and technology, both established and emerging.

If interoperability issues are not dealt with skilfully, the outcome can include anything from vendor lock-in and uncompetitive purchase prices to poor functionality and poor integration, warns Simon Elisha, senior systems engineer for Veritas Software. "Often organisations make 'lock, stock and barrel' commitments to vendors and then find out that the vendors' products do not even interoperate within themselves, let alone with other vendors' products," Elisha says.

"Furthermore, the capability of vendors to effectively work in the open-systems world is a critical factor. Many organisations have found their storage provider to be long on mainframe skills and light on skills in the Unix and Windows platforms. This has dramatic implications for the effectiveness of the storage solution and deployment times," he says.

"It doesn't take that much to throw a tried-and-tested interoperable solution off the rails," says Scott Drummond, program director for IBM's SAN group. "It's the difference between using a Q-Logic 8000 switch instead of a 9000 switch."

So while the channel struggles to take storage on board, where do customers go for trusted advice? Aside from peers, analysts such as Gartner, IDC, the Butler Group and Meta are playing independent counsel, in addition to the expertise and supposed neutrality of the ISVs. "Customers are now consulting with Veritas prior to their purchase decision for independent, unbiased advice on storage solutions and capabilities," says Elisha.

"There are limited avenues for true storage professional services," says Andrew Manners of HP (formerly Compaq). He says the opportunity exists for resellers to focus on storage as one of their key revenue-generating areas for the business. According to Burt Noah of Acer, 50 per cent of a storage investment will be spent in services. With these kinds of figures being bandied about, Acer has recognised it needs a storage solution to stay competitive and will launch its enterprise-level storage offering this month. The problem for the reseller and the customer, says Noah, is that as soon as a company spends $5,000 educating the systems administrator in Fibre Channel support and SANs, it increases that individual's worth by 30K a year.

Training people is the most difficult integration issue faced by companies deploying storage solutions, agrees Robert Peglar, corporate architect of XIOtech (a Seagate company). "By far, the skill-set issue is the largest hurdle. Many of the server-interconnect-storage issues are trivial once understood, but getting to the point of fully understanding the entirety of storage networking can be daunting unless specific training is obtained," Peglar says.

"This is not unlike the advent of complex IP networks over a decade ago."

Meanwhile, customers are weighing up the virtues of emerging storage technologies such as Fibre Channel versus iSCSI, and SAN versus NAS versus DAS (direct-attached storage). Once these structural decisions have been made, there is still application integration to worry about. "All too frequently, sites view interoperability to mean using multiple hardware types," says BMC's Jenkins. "The real issue is not just reporting whether another vendor's switch is working, but which applications are utilising which hardware and whether there is performance degradation between applications due to hardware contention."
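To make Jenkins's point concrete, here is a minimal Python sketch of the kind of mapping a monitoring tool would need. The application-to-hardware paths and the contention check are invented for illustration, not drawn from any vendor's product.

```python
# Hypothetical mapping: application -> the hardware path it uses.
paths = {
    "billing-db":  ["fc-switch-01", "array-a"],
    "mail-store":  ["fc-switch-01", "array-b"],
    "web-content": ["fc-switch-02", "array-b"],
}

def contended_hardware(paths, max_sharers=1):
    """Report hardware used by more applications than the threshold."""
    sharers = {}
    for app, hardware in paths.items():
        for hw in hardware:
            sharers.setdefault(hw, []).append(app)
    return {hw: apps for hw, apps in sharers.items() if len(apps) > max_sharers}

# fc-switch-01 and array-b each carry two applications, so a slowdown
# in one workload may be contention rather than a hardware fault.
print(contended_hardware(paths))
```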

At an ISV level, the need to "manage the managers" has become a priority, as customers demand a more consolidated view of their storage system. Each individual piece of hardware and software comes with its own disconnected monitoring tool, causing major headaches for administrators.
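A hedged sketch of the "manage the managers" idea follows: each device's own monitoring tool is hidden behind a common interface so the administrator sees one consolidated report. The adapter classes stand in for the disconnected per-device tools and are entirely hypothetical.

```python
class MonitorAdapter:
    """Common interface wrapped around each device's native tool."""
    def status(self) -> dict:
        raise NotImplementedError

class ArrayTool(MonitorAdapter):
    def status(self):
        return {"device": "array-01", "health": "ok", "used_pct": 71}

class SwitchTool(MonitorAdapter):
    def status(self):
        return {"device": "fc-switch-01", "health": "degraded", "used_pct": None}

def consolidated_view(adapters):
    """One report instead of one disconnected tool per device."""
    return [a.status() for a in adapters]

for row in consolidated_view([ArrayTool(), SwitchTool()]):
    print(row)
```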

"Within the budget for storage, the cost of people is the largest chunk - some say 35 to 40 per cent of the IT budget concerned with storage revolves around the people required," says Peglar. "Only 20 per cent of the budget is actual hardware cost and another 15 to 20 per cent is maintenance and services fees."

Emerging storage technologies

Storage virtualisation

One of the limitations of current storage solutions is that software - operating systems and applications - uses some very old rules to figure out where to store its data. It must still identify storage locations with great specificity, usually involving a combination of network ID and hierarchical path. A company can have a vast quantity of storage on its network, but it's often split into discrete pools, each of which is managed and accessed separately.

Storage virtualisation merges these storage pools in ways that best meet application requirements. It also makes it easier to reallocate storage as needed, even across multiple file servers or SANs. With storage virtualisation, you size your storage for the needs of the entire network, not the needs of each class of application.
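As a rough illustration of the principle rather than any particular product, the following Python sketch presents several discrete pools as one logical allocation target; all names and the placement policy are assumptions.

```python
# A minimal sketch of the idea behind storage virtualisation: several
# discrete physical pools are presented as one logical pool, and
# allocations land wherever capacity exists. Real products work at the
# block or file-system level; everything here is hypothetical.

class PhysicalPool:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb

class VirtualPool:
    """Presents many physical pools as a single allocation target."""
    def __init__(self, pools):
        self.pools = pools

    def total_free_gb(self):
        return sum(p.free_gb() for p in self.pools)

    def allocate(self, size_gb):
        # Place the request on the pool with the most free space;
        # the application never sees which pool was chosen.
        pool = max(self.pools, key=lambda p: p.free_gb())
        if pool.free_gb() < size_gb:
            raise RuntimeError("insufficient capacity across all pools")
        pool.used_gb += size_gb
        return pool.name

san = VirtualPool([PhysicalPool("array-a", 500), PhysicalPool("array-b", 200)])
print(san.allocate(120))    # lands on array-a; the caller sees one pool
print(san.total_free_gb())  # capacity is sized for the whole network
```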

The benefits are clear, yet storage virtualisation is new to most IT leaders. Only a handful of products exist now, and vendors haven't done a very good job of describing the technology's benefits to prospective customers.

NDMP

NAS (network-attached storage) appliances bring the convenience of plug-and-play to networked storage. But each NAS unit is an island, so managing several units, particularly performing bandwidth-intensive backups, is a challenge.

Network Data Management Protocol (NDMP) is a cross-vendor standard for enterprise data backups. The standards group, led by Network Appliance and Legato Systems, intends to get all NAS units and tape devices speaking the same language. In this model, the backup software orchestrates a network connection between an NDMP-equipped NAS appliance and an NDMP tape library or backup server. The appliance uses NDMP to stream its data to the backup device, making efficient use of network resources. NDMP is a voluntary standard, so total product coverage is unlikely, but a critical mass of hardware and software does seem likely at this point.
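The control flow is the interesting part, so here is a minimal, hypothetical sketch of NDMP's division of labour: the backup application issues commands, while the appliance streams the data itself. The class and method names are invented for illustration.

```python
# A hedged sketch of NDMP's three-party model. In real NDMP the filer
# opens a data connection to the tape device and moves the data itself;
# the backup application carries control traffic only, no bulk data.

class NDMPDevice:
    def __init__(self, address):
        self.address = address

class NASFiler(NDMPDevice):
    def stream_backup(self, path, target):
        # The filer streams directly to the target, bypassing the
        # backup server and making efficient use of the network.
        print(f"{self.address}: streaming {path} -> {target.address}")

class TapeLibrary(NDMPDevice):
    pass

class BackupApplication:
    """Plays the data-management-application role: commands only."""
    def run_job(self, filer, library, path):
        filer.stream_backup(path, library)  # a command, not a data copy

BackupApplication().run_job(
    NASFiler("filer-01"), TapeLibrary("tape-01"), "/vol/projects")
```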

NDMP is quite new and its advantages aren't yet widely recognised. Still, some companies have added tape libraries with NDMP to their mix of storage devices; a small handful plan to do so in the next 12 months.

The progress of storage technology is linked to advances in networks and I/O buses. Rising technologies such as 10 Gigabit Ethernet, PCI-X, and InfiniBand will bring about new approaches to networked storage, just as Fibre Channel, Gigabit Ethernet, and 64-bit PCI did. Capacity, performance, and connectivity demands will always rise, but technology can keep up.

iSCSI

Performance is a critical consideration in storage design, but it isn't the only criterion.

The SCSI parallel interface blasts data at very high speeds over short distances on behalf of a single server. Multigigabit Fibre Channel SAN switches help address SCSI's distance and single-server limitations while keeping performance high. But neither approach works over very long distances or extends access to systems that aren't directly connected to the SCSI bus or switch.

iSCSI (Internet SCSI, encapsulated in IP) solves SCSI's accessibility and distance problems by leveraging existing TCP/IP infrastructures. An iSCSI host adapter turns a server's SCSI commands and data into network packets and transmits them across the company's IP network. The advantage of iSCSI is that the operating system doesn't know a network is involved - iSCSI looks like a local storage device.
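Here is a much-simplified sketch of that encapsulation idea, with an invented framing format; real iSCSI defines its own PDU layouts, sessions and a login phase, none of which are shown.

```python
# A toy illustration of the iSCSI principle: a SCSI command block is
# wrapped in a header and carried over an ordinary TCP/IP connection.
# The header layout below is hypothetical, invented for this sketch.

import struct

def encapsulate(scsi_cdb: bytes, lun: int) -> bytes:
    # Invented framing: LUN and length prefix, then the raw SCSI
    # command descriptor block, ready to cross the IP network.
    header = struct.pack("!HI", lun, len(scsi_cdb))
    return header + scsi_cdb

READ_10 = bytes([0x28, 0, 0, 0, 0, 0x10, 0, 0, 0x08, 0])  # read 8 blocks

packet = encapsulate(READ_10, lun=0)
# A real initiator would now send this over TCP to the target's portal,
# e.g. socket.create_connection(("target.example.com", 3260)).
print(packet.hex())
```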

Also, iSCSI is emerging as an alternative to Fibre Channel for SANs, allowing companies to deploy SANs using their existing Ethernet cabling. The IETF (Internet Engineering Task Force) has not yet ratified the iSCSI standard, but several vendors are shipping equipment designed against the working draft.

Of these three technologies, iSCSI has gained the most attention. Twenty-six per cent of survey respondents (InfoWorld, March 2002) report already using it, and another 25 per cent plan to add the protocol to their enterprise in the next 12 months. When it comes to specific storage devices, 18 per cent of readers have implemented iSCSI tape libraries, and another 11 per cent are likely to buy them this year. iSCSI's transparency plus the quick adoption of Gigabit Ethernet will drive its prominence in the enterprise.

Vendors cheer interoperability initiatives

Determined to bridge the interoperability gap and ease user frustrations, storage equipment vendors are rallying around the Distributed Management Task Force's CIM (Common Information Model).

On display last month at the biannual Storage Networking World conference in Palm Desert, California, CIM is a language and methodology for describing management data, designed to eliminate the onerous task of acquiring the proprietary APIs needed to manage multi-vendor storage systems on a single network.

Part of the overarching WBEM (Web Based Enterprise Management) standard designed to unify the management of enterprise computing environments, CIM is being developed by the Storage Networking Industry Association and several vendors.
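To illustrate what a common model buys developers (CIM itself is defined in MOF and is far richer), here is a hedged Python sketch: one standard class shape, with thin per-vendor translators instead of a driver for every vendor pairing. The vendor data formats are invented.

```python
# One standardised shape for a managed volume, plus thin per-vendor
# translators. Management software is then written once, against the
# common class. All field names here are hypothetical.

from dataclasses import dataclass

@dataclass
class CIMStorageVolume:          # stand-in for a standardised CIM class
    device_id: str
    block_size: int
    number_of_blocks: int

def from_vendor_a(raw: dict) -> CIMStorageVolume:
    # Vendor A's proprietary API reports size in kilobytes (invented).
    return CIMStorageVolume(raw["id"], 512, raw["size_kb"] * 2)

def from_vendor_b(raw: dict) -> CIMStorageVolume:
    # Vendor B reports a block count directly (also invented).
    return CIMStorageVolume(raw["serial"], 512, raw["blocks"])

volumes = [from_vendor_a({"id": "a-01", "size_kb": 1024}),
           from_vendor_b({"serial": "b-77", "blocks": 4096})]
for v in volumes:
    print(v.device_id, v.block_size * v.number_of_blocks, "bytes")
```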

EMC revealed at the show that it is developing products to support CIM, expected to ship later this year. EMC's support for CIM and WBEM follows this year's release of its WideSky storage management package, an entrant in the storage resource management space that allows management of non-EMC hardware and features its own common access model.

"EMC is enthusiastic about CIM," says Jim Rothnie, EMC's chief technology officer. "It is not ubiquitous yet, but we hope it gets carried to adoption."

Meanwhile, other vendors including Brocade, Veritas, Sun Microsystems and Prisa Networks used the conference to demonstrate CIM's ability to connect and manage disparate storage products in a single fabric.

The industry momentum signals an extension of CIM to address specific technology challenges such as interoperability of multi-vendor equipment, including array controllers, FC (Fibre Channel) switches, and FC-to-SCSI routers.

"This is really happening," says Dona Stever, the technology centre chair of the Storage Networking Industry Association. "There is a lot of excitement; it is the promise we've been missing."

Notably absent from the event were IBM and Hitachi Data Systems. IBM kept its reasoning under wraps; however, Scott Drummond, IBM program director for SANs, says it was not about resisting the standard.

But despite the cheerleading, CIM is still very much a work in progress. Counting SNMP, there are still many ways to issue commands to a storage array, and vendors are likely to agree on only a small subset of them.

Analysts agree that the storage industry has collectively struggled for years to develop solutions that help its customers manage data across a multi-vendor storage network, due in large part to vendor reluctance to share or open their proprietary APIs to competitors and third-party developers.

Yet, according to Stever, the widespread industry support demonstrated at the show is a sign that consensus could become a reality. Stever said that without CIM, developers face the reality of building more than 300 device drivers in order to connect different vendors' storage equipment.

