Twice in the past week, I heard people for whom I have a great deal of respect conjecture that within two years Netscape will no longer be a significant player in this industry. Both times I reluctantly agreed, more to avoid having to think about it than because I really believe it.
Then, while pondering the rumoured death of client/server, the answer came to me. Netscape's longevity doesn't depend on its Web browser because client/server isn't dead. All right, work with me a moment, and I'll tie the two thoughts together.
You see, the original promise of client/server was to leverage the processing power of the client against the work typically done on the server. The trouble is, the client is often the worst possible place to put server processes. Developers who fell into this trap should have known better. But, forgive them, they knew not what they were doing. PowerBuilder, SQL Windows, and other object-oriented, extensible, multiplatform, distributed, integrated, ODBC-compliant, and OLE-enabled (did I leave anything out?) development tools were too alluring.
Clever PC folk that we are, we took things such as field validation and complex business rules off the database server and moved them out to the hundreds or thousands of powerful desktops.
Then, months after such applications were deployed, a few business rules changed. As a result, the developers, CIOs, or whoever was responsible for the change, promptly decided to look into chicken farming as a career.
Client/server itself didn't fail. The developers and the ISVs abused it. And they didn't take the technology far enough.
For instance, many client/server applications that failed could have worked very well on a three-tier architecture. In three-tier client/server, the business intelligence is kept off both the clients and the database server and placed on a middle tier. That tier can be managed centrally, obviating the problems of putting business logic on the client. Now here's where it gets interesting. Plot the course from raw client/server to three-tier client/server until it crosses the Internet. Partway along, the middle tier multiplies into distributed components that cooperate with one another.
Think of the advantages this adds to the client/server model. Someone in New Jersey places an order for an Alludium Q32 Explosive Space Modulator. The client request kicks off a query to the appropriate database server in Wisconsin. The server sees the part is out of stock and tells the middleware to send a back-order request to another piece of middleware on a server in Kentucky. Bada-boom, bada-bing - you're using distributed objects.
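The order-routing scenario above can be sketched in a few lines. This is a minimal, purely illustrative model: the class and warehouse names are hypothetical stand-ins for the middleware components, not any real product's API. The point is that the client calls one middle-tier object, and that object decides where the work happens.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical middle-tier object: the client never talks to a database
// directly. It asks the middleware to place an order, and the middleware
// routes the request to whichever cooperating component can satisfy it.
class OrderRouter {
    // Stock levels per site; stands in for the database servers
    // in Wisconsin and Kentucky from the example above.
    private final Map<String, Integer> stock = new HashMap<>();

    OrderRouter() {
        stock.put("Wisconsin", 0);   // out of stock
        stock.put("Kentucky", 12);
    }

    // Returns how the order was handled: fulfilled at the primary site,
    // back-ordered through the other site's middleware, or unavailable.
    String placeOrder(String part, int qty) {
        if (stock.getOrDefault("Wisconsin", 0) >= qty) {
            return "fulfilled:Wisconsin";
        }
        // Primary site can't fill it: forward a back-order request
        // to the cooperating component at the other site.
        if (stock.getOrDefault("Kentucky", 0) >= qty) {
            return "back-ordered:Kentucky";
        }
        return "unavailable";
    }
}
```

In a real deployment the two sites would be separate processes reached through middleware; here a single map stands in so the routing decision itself is visible.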
Deja vu all over again
Now let's start this thought process over again with Web technology. The promise of the World Wide Web is to leverage the Internet to create a way of providing anyone with interactive, transparent access to a global library of information.
Suddenly, the market becomes flooded with flashy Web page editors and Java development kits. Before you know it, we're linking Web servers to databases and creating interactive front ends to the world. The potential value of a Web application has been discovered.
So far, all this magnificent work is being brought to you courtesy of HTTP and HTML. But one of the problems of making the Web interactive is HTTP, the protocol that started the Web boom. HTTP is stateless - each request stands alone: you connect, get your data, then disconnect, and the server remembers nothing in between. This makes for a brain-dead Web application. When you fill out a data form on a Web page, no business rules can be applied until you submit the form and the server finally sees it.
Java can make up for some of what you lose in a stateless protocol. You can code your validation checks and rules in Java and ship them along with the Web page. But putting the business logic in a Java-enhanced Web page re-creates the scenario that kept client/server from reaching its potential.
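The trap described above is easy to see in code. Here is a minimal sketch, with an invented rule and class name, of a business rule compiled directly into the client-side Java that ships with the page: the limit is frozen into every copy downloaded, so when the rule changes, every deployed client is instantly wrong.

```java
// Hypothetical business rule baked into the client-side applet code.
// The constant travels with the page; changing the rule later means
// every already-deployed client enforces a stale value.
class ClientSideRules {
    // Imagine this limit changes from 100 to 50 a few months
    // after the application is deployed.
    static final int MAX_ORDER_QTY = 100;

    static boolean validateOrder(int qty) {
        return qty > 0 && qty <= MAX_ORDER_QTY;
    }
}
```

This is exactly the field-validation-on-the-desktop mistake from the client/server era, restated in Java.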
The best solution, therefore, is a hybrid: a component model, such as Java or OpenDoc, running over a CORBA-compliant ORB and speaking the Internet Inter-ORB Protocol (IIOP). The Java or OpenDoc application runs at the browser. Either way, it communicates in real time with intelligent middleware (distributed objects) via IIOP. The middleware then interacts with whatever services it needs (database, e-mail, other business objects, etc.). In other words, your application gets the benefits of multi-tier architecture on the Web.
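The hybrid can be sketched as follows. This is an assumption-laden illustration, not CORBA code: over a real ORB the client-side stub would be generated from IDL and speak IIOP on the wire, so a plain Java interface stands in for it here, and all names are invented. What matters is that the rule lives in one middle-tier object, not in the browser.

```java
// What the IDL-generated stub would expose to the browser-side client.
interface OrderService {
    boolean validateOrder(int qty);
}

// Middle-tier implementation: change the rule here, and every client
// picks it up on its next call. No redeployment of the clients.
class OrderServiceImpl implements OrderService {
    private int maxQty = 100;    // centrally managed business rule

    public boolean validateOrder(int qty) {
        return qty > 0 && qty <= maxQty;
    }

    // Central administration: one change, visible everywhere at once.
    void updateRule(int newMax) {
        maxQty = newMax;
    }
}
```

Contrast this with the previous sketch: the client holds only the interface, so tightening the rule is one call on the server instead of a redeployment to thousands of desktops.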
This is essentially the direction Netscape is taking the technology, and it is what should keep the company on top. Besides making Netscape the visible leader, success here could have far-reaching strategic implications for the market.
Netscape's Marc Andreessen speculates that HTTP will be replaced by IIOP. I would go one step further and say Web browsers will be replaced by or become intelligent containers of networked objects. If that's true, then the battle isn't between browsers. It is between objects. And all this fuss about whether Microsoft or Netscape will win the browser wars could be much ado about nothing.