PC servers come together to assault glasshouse

With PC servers beginning to play a more strategic role in terms of running mission-critical applications, we can expect to see a wave of new server architectures, starting in the third quarter, that embody technologies typically associated with high-end systems.

Leading that charge are PC vendors who are adopting clustering technology in hopes of giving Windows NT, Unix, and OS/2 applications many of the same capabilities found on mainframes that are locked away in forbidding computer data centres.

By linking PC servers based primarily on quad-processor Pentium Pro boards from Intel, PC vendors are promising IS managers the best of both worlds: the low cost of an off-the-shelf approach to system design coupled with the scalability, security, and reliability of mainframes and minicomputers.

If everything works, the result will be a low-cost, easy-to-manage platform with the horsepower and capacity to handle mission-critical and data-intensive applications.

"If you can use multiple, less expensive servers that can be added through clustering as needs increase, you can control costs," said one IS manager.

Applications that would benefit from high availability and a series of inexpensive nodes linked by a high-speed interconnect include on-line transaction processing, intranet solutions, and data warehousing.

This has just about every major vendor from Compaq to mainframe providers such as Tandem racing to deliver low-end clusters.

"People are attracted by the idea of taking a lot of low-cost components and linking them through a high-speed network to get large systems," said Richard Hellyer, systems marketing manager at Tandem, in California. "It would mean that you could really realise an alternative mainframe approach."

Fail-over support

The most basic form of clustering - linking two PC servers to provide for application fail-over - is due to start arriving this summer from Digital Equipment. Compaq and Hewlett-Packard, meanwhile, intend to introduce separate solutions by year's end.

If one node fails on these systems, a backup node can pick up its data, application processing, and clients, providing high system availability.
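
The fail-over scheme described above can be sketched in a few lines. This is an illustrative model only, not any vendor's implementation: the node names, the heartbeat timeout, and the polling approach are all assumptions.

```python
import time

# Hypothetical two-node cluster: the backup watches the primary's heartbeat
# and takes over the workload when the primary stops responding.
HEARTBEAT_TIMEOUT = 3.0  # seconds without a heartbeat before fail-over

class Node:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def beat(self):
        self.last_heartbeat = time.monotonic()

def active_node(primary, backup, now=None):
    """Return the node that should be serving clients right now."""
    now = time.monotonic() if now is None else now
    if now - primary.last_heartbeat > HEARTBEAT_TIMEOUT:
        # Primary missed its heartbeat window: the backup picks up its
        # data, application processing, and clients.
        return backup
    return primary

primary, backup = Node("server-1"), Node("server-2")
assert active_node(primary, backup).name == "server-1"
# Simulate a crashed primary by ageing its heartbeat past the timeout.
assert active_node(primary, backup,
                   now=primary.last_heartbeat + 10).name == "server-2"
```

In a real product the heartbeat travels over a dedicated interconnect, and the takeover also involves remounting shared storage and reclaiming client connections, which this sketch omits.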

Microsoft intends to provide a set of APIs for NT that would support this version of clustering by year's end. Those APIs will then be extended in 1997 to support applications running across multiple nodes in an NT cluster.

Other x86-based server OSes are also expected to address clustering. IBM's OS/2, for example, already supports fail-over, and the company plans to add support for multiple nodes later. Meanwhile, many Unix vendors are looking at lowering the cost of clustering to counter NT and OS/2, and Novell expects to add clustering support to NetWare next year.

More sophisticated solutions will use a host of emerging technologies, such as high-speed interconnects based on either Fast Ethernet or proprietary designs and sophisticated storage subsystems, to let IS managers build massive clusters by connecting a series of four-processor Pentium Pro servers.

Database vendors are already poised to deliver scalable cluster solutions. Informix Software plans to deliver both an NT and Unix version of its Online Extended Parallel Server (XPS) by July. Oracle expects to release its Oracle Parallel Server (OPS) for NT by year's end, and other database companies are also working to support clustering.

Old dog, new tricks

The concept of clustering isn't new. It's been used for years by high-end system vendors - ranging from Digital's VAX clusters to Unisys's U Clusters - to provide system availability, scalability, and easier management. Multiple nodes - which can range from a uniprocessor server to the largest symmetrical multiprocessing (SMP) system on the market - run their own version of the OS but are linked to create a large, virtual system that is managed as a single logical image.

This capability adds a whole new dimension to PC servers.

"There are inherent weaknesses in the x86 SMP platform. Every time you add a processor you lose inherent capacity," said Jim Johnson, chairman of Standish Group International. "By clustering SMP servers, vendors are trying to solve this problem."

But even though vendors will be using similar technologies and software, PC clustering will come in a variety of flavours.

Clustering at the PC server level is due in great part to Intel's Standard High Volume four-processor Pentium Pro boards. Due out in mid-May, the boards will enable vendors to quickly deliver cost-effective SMP boxes.

But some vendors want to start with a larger SMP building block and are working on eight-processor boards.

Although some vendors, such as HP, are expected to wait for companies such as Microsoft to implement support for these boards in their operating systems, other vendors are forging ahead.

Digital will deliver Digital Clusters for Windows NT for its Prioris line of servers this summer to support application fail-over, said Bob Guilbert, Digital's NT clusters marketing manager, in Massachusetts.

Digital Clusters for NT won't use proprietary components, because it is based on a standard shared SCSI storage system and uses Ethernet or FDDI for connecting the servers, Guilbert added. But Digital will provide a special driver to manage disk access, he said.

The company has plans to support Microsoft's WolfPack APIs when they are released in 1997 to support applications across clustered nodes, Guilbert noted.

Compaq already offers fail-over between two ProLiant NT servers for Oracle and, as of last week, for Microsoft's SQL Server.

The company will extend this functionality by using Tandem's ServerNet technology to cluster Pentium Pro servers. Two-node clusters are due by October, with larger clusters due in 1997.

Oracle's OPS for Windows NT is expected by year's end for the Compaq platform, said Steve Bower, senior director of development for Oracle's workgroup solutions division.

OPS for NT will allow one instance of a database to be run across two servers; both servers will access the same database and I/O subsystem through a shared-disk implementation. The company is working with storage vendors on development of subsystems that can support multiple nodes. Oracle is also releasing APIs for interconnect parameters and managing clusters.

"We don't need the WolfPack APIs, as we have our own implementation of a distributed lock manager [DLM]," Bower said. The DLM communicates over a fast connection between the servers to maintain database concurrency, Bower added.
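The distributed lock manager Bower mentions can be illustrated with a toy in-memory version. This is a sketch of the general DLM idea, not Oracle's implementation; the resource names and the shared/exclusive modes are assumptions for illustration.

```python
# Toy lock manager: in a shared-disk cluster, each node must hold a lock
# on a disk block before reading or writing it, so the nodes' caches stay
# consistent. A real DLM distributes this table across the interconnect.
class LockManager:
    def __init__(self):
        self.locks = {}  # resource -> (mode, set of holding nodes)

    def acquire(self, node, resource, mode):
        """mode is 'shared' or 'exclusive'; returns True if granted."""
        held = self.locks.get(resource)
        if held is None:
            self.locks[resource] = (mode, {node})
            return True
        held_mode, holders = held
        if mode == "shared" and held_mode == "shared":
            holders.add(node)  # readers can share the lock
            return True
        if holders == {node}:  # a lone holder may upgrade its own lock
            self.locks[resource] = (mode, {node})
            return True
        return False  # conflicting lock: caller must wait and retry

    def release(self, node, resource):
        mode, holders = self.locks[resource]
        holders.discard(node)
        if not holders:
            del self.locks[resource]

dlm = LockManager()
assert dlm.acquire("node-a", "block-42", "shared")
assert dlm.acquire("node-b", "block-42", "shared")        # readers share
assert not dlm.acquire("node-a", "block-42", "exclusive") # writer blocked
dlm.release("node-b", "block-42")
assert dlm.acquire("node-a", "block-42", "exclusive")     # now lone holder
```

The point of running this traffic over a fast interconnect, as Bower notes, is that every cross-node read or write may require a lock message first, so lock latency bounds database throughput.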

Informix plans to beat Oracle to market by delivering its Online XPS database for both Windows NT and Unix by July, said Tim Shetler, vice-president of product management. Informix claims its database will offer better scalability than Oracle's because the database is partitioned and distributed among nodes, which are attached to independent standard storage subsystems. This "shared nothing" environment should eliminate I/O bottlenecks.
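The "shared nothing" approach can be sketched as hash partitioning: each row maps to exactly one node, which owns the disk that stores it. The node names and hashing scheme below are illustrative assumptions, not details of XPS.

```python
from hashlib import sha1

# Shared-nothing sketch: every row hashes to a single owning node, and
# each node writes only to its own independent storage subsystem.
NODES = ["node-0", "node-1", "node-2", "node-3"]

def owner(key):
    """Map a row key to the one node that stores it."""
    digest = int(sha1(str(key).encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

def insert(storage, key, row):
    # Only the owning node's local disk sees the write: there is no
    # shared I/O path for the nodes to contend over.
    storage.setdefault(owner(key), {})[key] = row

storage = {}
for cust_id in range(1000):
    insert(storage, cust_id, {"id": cust_id})

# Each row lives on exactly one node, so scans of a large table
# parallelise across nodes without a shared-disk bottleneck.
assert sum(len(part) for part in storage.values()) == 1000
```

Because no two nodes touch the same disk, there is no need for the distributed lock traffic a shared-disk design requires, which is the basis of Informix's scalability claim.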

Informix has developed its own API for interconnecting servers, but would not say which hardware architectures will be supported when the database is released.

But even after these solutions arrive, vendors will still face considerable challenges as they aim to provide enterprise-level platforms based on the x86 architecture.

Though Informix and Oracle are moving forward with their own development of NT cluster support, other ISVs may not jump ahead of Microsoft's development schedule, leaving the hardware solutions short of crucial software.

Although NT is becoming a popular choice at corporate sites, it still hits scalability walls, according to most vendors, and doesn't scale well past four processors.

Until scalability improves, convincing IS shops that NT can meet their needs could be difficult.

"NT is an OS that still has a lot of overhead that goes with it. We've spent a lot of money on high-performance computing, so why go with NT, which gives less performance?" said Daniel Jaye, chief technology officer at CMG Direct Interactive.

Some analysts also said that the PC hardware architecture running most x86 OSes is not yet sophisticated enough.

"If you take a mainframe, there are hundreds of cross-checks that the system does every time you send a bit," Standish's Johnson said. "PC hardware lacks that integrity, and the operating systems are still not mission-critical quality, except OS/2."

Even for users confident of the x86 architecture, other obstacles will still need to be addressed. Shared-disk solutions will require advanced storage subsystems.

"Historically, shared disk has been fairly limited in terms of scaling because of I/O interconnection problems," said Justin Rattner, an Intel fellow and director of the server architecture lab. "If you use SCSI connections, you pretty much run out of connectivity before you can connect many nodes."

A solution to such I/O troubles may emerge through the use of 100MB/sec Fibre Channel or the 80MB/sec Serial Storage Architecture on the Windows NT platform.

Another key issue in maintaining low costs and providing systems based on commodity components will be standardisation on an interconnect technology. Vendors are planning to implement a variety of technologies - from Fast Ethernet to FDDI to ServerNet - that may slow application development and affect the "openness" of systems.

But although there are obstacles to overcome, the savings x86 clusters will deliver may sway critics.

"The name of the game is price performance," CMG's Jaye said. "If the hardware for NT is half the price of Unix, that will make up for it being inherently slower."
