Intel's brave new world of 64-bit server architectures

As the director of Intel's Server Architecture Labs, Justin Rattner is one of the most influential executives in the industry. As more and more vendors rally around Intel architectures for the server, Rattner is making decisions today that will take effect throughout the next century. Rattner talked with Michael Vizard, Ephraim Schwartz, and Andy Santoni about the key issues that need to be addressed to resolve conflicting computing models, increase overall performance, and hasten the adoption of 64-bit server architectures.

IDG: Is it desirable, or possible, to have a server that automatically recognises what kind of device the client is and what computing model it's expecting, and can then respond accordingly?

Rattner: Yes and no. You have to know whether you're talking to a device that is strictly a terminal emulator or a device that you can download a program into. You have to make some decisions about what the client is capable of doing. This is why we see growing disenchantment with very lean clients: they are restrictive. Admittedly, the sub-$US1000 PC is very attractive to businesses that might have been considering lean clients of one form or another.

So what can be done to make this easier for customers to live with?

A lot of it has to do with middleware architecture. I think there is a balanced architecture that gives application developers the freedom to implement an application at any level in the hierarchy. It doesn't force you to be server-centric. I want to be able to do transactions on my client, and I can think of lots of applications where client-routed transactions make complete sense, particularly in the age of the Internet with VPN (virtual private network) technology. If I create a business, is my client a server, or is my server a client? I'm not really sure anymore.

What you'll see us talking more about in the months ahead is how to create middleware structures that allow for great flexibility in where you deploy the power, and how you spread the application across the hierarchy, so you don't get forced into one extreme.

Are you going to build this middleware framework?

I think our next step is to demonstrate the concepts. We're very busy with that already. We're doing some demos internally and we'll be taking it to the public in the next few months. And then we'll be working with the middleware industry. What we want to do is be above the object plumbing. We're neutral to whether you're in a COM+ (component object model) or a JavaBeans environment.

From our point of view, there's been too much concern about that. That's really a technology, not a solution to anything. What we want to do is provide a consistent set of middleware services that operates and interoperates across both of those object frameworks, which are likely to co-exist for some time.

What exactly is a balanced architecture from Intel's point of view?

A balanced architecture means that clients are not arbitrarily excluded from performing significant roles in the design of an application. That was one of our big concerns when people started taking what we believe were fairly extreme views on how applications should be deployed in the era of the Internet.

A number of these models that have been advocated are fairly narrowly defined. There's no reason why they can't be accommodated within the balanced architecture.

In terms of actually implementing these architectures, Intel is now pushing developers to adopt 64-bit servers. What types of applications are going to be leading edge drivers for implementing 64-bit servers?

Databases already put tremendous pressure on address space, primarily to buffer disk storage and provide very large caching of the disks. So that's right at the leading edge.

The VLM (very large memory) support in Windows NT is driven by that. I think maybe the next thing is going to be completely memory-resident databases. If you look at the storage capacities on the machine, some significant fraction of all of the databases on the planet could actually be put into main storage. And once that becomes accepted, I think database systems may actually evolve so there's less of a presumption that memory is just a big cache for storage.
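The address-space pressure Rattner describes can be made concrete with some back-of-the-envelope arithmetic. The figures below are illustrative only (the 100GiB database is a hypothetical example, not from the interview):

```python
# Why a 32-bit address space caps database caching, in rough numbers.
# The 100 GiB database size is a hypothetical example for illustration.

max_bytes_32 = 2 ** 32            # 4 GiB addressable by a 32-bit process
max_bytes_64 = 2 ** 64            # ~16 EiB addressable by a 64-bit process

db_size_bytes = 100 * 2 ** 30     # a hypothetical 100 GiB database

# Fraction of the database a single 32-bit process could even address,
# ignoring OS reservations (which shrink the usable window further):
fraction_32 = min(1.0, max_bytes_32 / db_size_bytes)

print(f"32-bit addressable: {max_bytes_32 / 2**30:.0f} GiB")
print(f"Fraction of a 100 GiB DB addressable: {fraction_32:.0%}")
print(f"64-bit addressable: {max_bytes_64 / 2**60:.0f} EiB")
```

Even before the operating system carves out its own share, a 32-bit buffer cache can cover only a few percent of such a database, which is the pressure driving the move to 64-bit address spaces.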

When the whole database is in memory, the kinds of things you can do become much more interesting because now you're not pushing things on and off disks; you are just moving pointers around.
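The "just moving pointers" idea can be sketched in a few lines. This is a minimal illustration, not any specific database's design; the row layout and lookup function are hypothetical:

```python
# Sketch: when rows live in memory, an index is a list of references.
# Reordering or re-indexing rearranges pointers without copying rows
# or touching disk. Names and structure are illustrative only.

rows = [
    {"id": 3, "name": "widget"},
    {"id": 1, "name": "gadget"},
    {"id": 2, "name": "gizmo"},
]

# A "sorted index" is just references to the same row objects, reordered.
index_by_id = sorted(rows, key=lambda r: r["id"])

# No row data was copied -- the index entries ARE the original objects.
assert index_by_id[0] is rows[1]

def lookup(index, key):
    """Binary search over the pointer list; O(log n), no I/O."""
    lo, hi = 0, len(index)
    while lo < hi:
        mid = (lo + hi) // 2
        if index[mid]["id"] < key:
            lo = mid + 1
        else:
            hi = mid
    if lo < len(index) and index[lo]["id"] == key:
        return index[lo]
    return None

print(lookup(index_by_id, 2)["name"])   # gizmo
```

The point of the sketch is that a lookup or reorganisation manipulates references in memory rather than issuing disk reads and writes, which is what makes near real-time response plausible.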

So we'll see near real-time responses?

Yes. I think that's going to be one of the major pulls. And not just for databases. I think that business applications -- the Baans, the PeopleSofts, the SAPs -- are interested in exploiting very large physical memories. And you've got to have address space to do that.

There's a perception out there that Intel is the vehicle through which Microsoft is going to drive Windows NT and throw Unix out of the enterprise. What is Intel's take on that whole NT/Unix discussion?

I think you should judge by our moves with SunSoft on Solaris for Merced. We're working with all the OS vendors. There's only one NT vendor, so that simplifies that problem . . . And they're moving very aggressively; they're already hosting developer events for people who want to get working on 64-bit NT products. But IA-64 was viewed as a sufficiently important development that Sun was motivated to enter into an agreement there. So we're neutral on the architecture.

The IA-64 is very attractive to those OEMs who've been exclusively dependent on either a proprietary RISC processor or some broadly available RISC processor. IA-64 represents a very attractive transition point for them. And most of them are Unix suppliers, so Unix is going to be a very important system on the IA-64 platform.