Peering into the next wave of Internet apps

XDegrees founder and CTO Dan Teodosiu talks about the next generation of Web services and the impact they will have on enterprise computing applications.

IDG: What is XDegrees trying to do?

Teodosiu: We're enabling completely new classes of network applications. If you look at the browser today, it's pretty much unidirectional. Your machine with a browser can be just a client for information. With our platform, we essentially turn the machine into a bidirectional agent - a system that can both pull in information from other sites and publish information to other machines on the Internet. One part of the system we are building provides the ability to control exactly how, where and when resources are shared and accessed from other machines.
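To make the bidirectional idea concrete, here is a minimal sketch of a peer that acts as both server and client in one process; the folder name, port and helper functions are illustrative assumptions, not XDegrees' actual implementation.

```python
# A minimal sketch of a bidirectional peer: it publishes a local folder to
# other machines and can fetch what other peers publish the same way.
# The folder, port and function names are hypothetical, not XDegrees' code.
import http.server
import socketserver
import threading
import urllib.request
from functools import partial

SHARED_DIR = "shared"  # hypothetical folder of resources this peer publishes
PORT = 8470            # hypothetical port

def start_publishing() -> socketserver.TCPServer:
    """Server half: expose everything under SHARED_DIR to other peers."""
    handler = partial(http.server.SimpleHTTPRequestHandler, directory=SHARED_DIR)
    server = socketserver.TCPServer(("", PORT), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def fetch_from_peer(peer_host: str, resource: str) -> bytes:
    """Client half: pull a resource that another peer has published."""
    with urllib.request.urlopen(f"http://{peer_host}:{PORT}/{resource}") as resp:
        return resp.read()
```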

Furthermore, we provide you with a way to manage equivalent copies of those resources without having to worry about how many copies there are, where they are located, or where you can get the best copy in terms of access speed and network hops.
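A rough illustration of that replica-selection idea, under assumed names: probe each equivalent copy and keep the one that answers fastest. A real system would presumably also weigh network hops and copy freshness.

```python
# A sketch of choosing the "best" of several equivalent copies by measured
# response time. Names and the probing strategy are assumptions.
import time
import urllib.request

def best_replica(urls: list[str]) -> str:
    """Return the replica URL with the lowest measured response time."""
    timings = {}
    for url in urls:
        start = time.monotonic()
        try:
            # Open the connection, read nothing: a cheap reachability probe.
            with urllib.request.urlopen(url, timeout=2):
                timings[url] = time.monotonic() - start
        except OSError:
            continue  # unreachable copies are simply skipped
    if not timings:
        raise LookupError("no reachable replica")
    return min(timings, key=timings.get)
```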

How will this affect enterprise computing applications?

There's a new generation of applications in store for the enterprise. We'll be able to leverage [them] on this kind of framework to provide better content management and a more unified way of storing and disseminating information produced by each person.

Today we see competing computing models emerging. What is your take on the different approaches to Internet computing?

If you look at any kind of distributed system, you have this continuum between fully distributed and fully centralised. If you look at the Web nowadays, it's fully centralised. If you look at something like Gnutella, it's fully distributed.

What is driving the move to peer-to-peer computing?

All the major advances in distributed systems have been brought about by technological changes. What we've seen in the past few years is a dramatic increase in memory size, disk space and processor speed on desktop machines. We've also seen a dramatic improvement in connectivity. All those technological changes are actually enabling this quantum leap. That's why you're seeing very successful applications such as Napster.

Why did you need to build your offering as an Internet service, as opposed to an application that a company would buy?

For any of those schemes to work well, you need some kind of basic infrastructure that you can rely on when building applications. Any single Web server basically relies on DNS (the Domain Name System); that's the mechanism users rely on to find that server. Similar to DNS, we provide a set of back-end services that allow users to find various resources that are located on other peer machines.
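By analogy, a resolution service of that kind might look like the following sketch: a registry mapping a resource name to the peers currently holding a copy, the way DNS maps a host name to an address. The class and method names here are hypothetical.

```python
# A sketch of the DNS analogy: resource name -> peers holding a copy.
# Illustrative only; this is not the actual XDegrees back-end service.

class ResourceResolver:
    def __init__(self):
        self._table: dict[str, set[str]] = {}  # name -> peer addresses

    def register(self, name: str, peer: str) -> None:
        """A peer announces that it holds a copy of the named resource."""
        self._table.setdefault(name, set()).add(peer)

    def resolve(self, name: str) -> set[str]:
        """Like a DNS lookup: where can this resource be fetched from?"""
        return self._table.get(name, set())

resolver = ResourceResolver()
resolver.register("reports/q3.pdf", "peer-a.example.net:8470")
resolver.register("reports/q3.pdf", "peer-b.example.net:8470")
print(resolver.resolve("reports/q3.pdf"))
```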

How would an application use this service?

We have a prototype that uses Microsoft Outlook. If you look at the way attachments are sent nowadays through e-mail, they are basically appended to the message, and then you send this whole big chunk that contains both the message and the attachment. This is fine for small attachments, but if you're sending very large files it may quickly become a problem.

In fact, for most users of Web-based e-mail, the state of the art is so restrictive that you can't even send a song, for instance, to another user, because it will fill up the inbox. Another way to do this is, instead of sending the attachment with the message, to just send a reference to the attachment when you send the e-mail. Then, once the e-mail has reached its destination, the recipient of the message can fetch the attachment directly from your machine in a peer-to-peer fashion by just clicking on that reference.
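The reference-passing scheme he describes could be sketched like this; the URL, helper names and transport are assumptions for illustration, not the Outlook prototype itself.

```python
# A sketch of attachment-by-reference: the mail carries only a link, and the
# recipient pulls the file directly from the sender's machine on demand.
from email.message import EmailMessage
import urllib.request

def compose_with_reference(sender: str, recipient: str, file_url: str) -> EmailMessage:
    """Build a small message containing a reference instead of the file itself."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Large file (fetch on demand)"
    msg.set_content(f"The attachment is available directly from my machine:\n{file_url}\n")
    return msg

def fetch_referenced_attachment(file_url: str, save_as: str) -> None:
    """Recipient side: click the reference, fetch peer-to-peer."""
    with urllib.request.urlopen(file_url) as response, open(save_as, "wb") as out:
        out.write(response.read())
```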

How did you develop this concept?

We wanted the resources at the edge to be capable of taking a more active role on the Web. So we needed a mechanism to address those resources, manage equivalent copies of content and activate services. Furthermore, we needed to do all of this in a secure fashion so that users could control exactly when and with whom they would share their resources.
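One way to picture that kind of sharing control is a per-resource policy checked on every request, as in this hypothetical sketch; the policy model is invented for illustration and is not XDegrees' actual security design.

```python
# A sketch of "control exactly when and with whom" sharing happens: each
# shared resource carries an owner-defined policy checked on every request.
from datetime import datetime, time

class SharePolicy:
    def __init__(self, allowed_users: set[str],
                 start: time = time(0, 0), end: time = time(23, 59)):
        self.allowed_users = allowed_users
        self.start, self.end = start, end  # daily window when sharing is on

    def permits(self, user: str, when: datetime | None = None) -> bool:
        when = when or datetime.now()
        return (user in self.allowed_users
                and self.start <= when.time() <= self.end)

policy = SharePolicy({"alice@example.com"}, start=time(9, 0), end=time(17, 0))
assert policy.permits("alice@example.com", datetime(2001, 6, 1, 10, 30))
assert not policy.permits("mallory@example.com", datetime(2001, 6, 1, 10, 30))
```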

The second realisation was that there are basically two ways to do this. One is to try to build a completely separate universe by defining a completely new naming scheme for peer resources, then implementing that naming scheme and all the tools around it. But that's not the most appealing way, because there is already a humongous number of tools out there to deal with the Web, and people won't throw those tools away. So to build something like this, it had to be compatible with the existing addressing scheme that's in ubiquitous use on the Web. It had to use URLs and it had to work with browsers seamlessly.
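A sketch of what URL-compatible naming might look like: the peer resource name is carried inside an ordinary http URL, so unmodified browsers and tools can follow it. The gateway host and path layout here are invented for illustration.

```python
# A sketch of naming peer resources with plain, browser-clickable URLs.
# The gateway host and path convention are hypothetical.
from urllib.parse import quote, unquote, urlparse

GATEWAY = "resolve.example.net"  # hypothetical resolution gateway

def peer_resource_url(owner: str, resource: str) -> str:
    """Name a resource on a peer with a standard http URL."""
    return f"http://{GATEWAY}/{quote(owner)}/{quote(resource)}"

def parse_peer_resource(url: str) -> tuple[str, str]:
    """Recover (owner, resource) from such a URL."""
    path = urlparse(url).path.lstrip("/")
    owner, _, resource = path.partition("/")
    return unquote(owner), unquote(resource)

url = peer_resource_url("dan@example.com", "photos/trek.jpg")
# -> http://resolve.example.net/dan%40example.com/photos/trek.jpg
assert parse_peer_resource(url) == ("dan@example.com", "photos/trek.jpg")
```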

The third realisation was that one cannot treat the resources at the edges as one treats servers, because most servers in this world sit at co-location facilities or are somehow professionally managed. The basic assumption is that they are up most of the time. For peer resources, this assumption does not hold because user machines just come and go. People switch them off, then switch them back on later. The system had to be designed from the ground up to cope with this kind of unreliable behaviour at the edge.
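Coping with machines that come and go might look roughly like the following: peers renew their presence with heartbeats, and a fetch silently fails over to another live copy. The timeout value and structure are assumptions, not the actual design.

```python
# A sketch of tolerating unreliable edges: TTL-based presence plus failover.
import time

HEARTBEAT_TTL = 30.0  # seconds before a silent peer is presumed offline

class PresenceTable:
    def __init__(self):
        self._last_seen: dict[str, float] = {}

    def heartbeat(self, peer: str) -> None:
        self._last_seen[peer] = time.monotonic()

    def is_alive(self, peer: str) -> bool:
        seen = self._last_seen.get(peer)
        return seen is not None and time.monotonic() - seen < HEARTBEAT_TTL

def fetch_with_failover(presence: PresenceTable, replicas: list[str], fetch) -> bytes:
    """Try live replicas in order; peers that vanished are skipped."""
    for peer in replicas:
        if not presence.is_alive(peer):
            continue
        try:
            return fetch(peer)
        except OSError:
            continue  # peer dropped off between the check and the fetch
    raise ConnectionError("no live replica available")
```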

Where are we in the development of the Internet?

I think there's still a long way to go in the evolution of the Web. There are a couple of big things that are about to happen. One of them is getting away from the client/server model and involving the peers or the machines at the edges of the Internet in a more active role on the Web.

Dan Teodosiu - XDegrees

Age: 34.

Title: CTO.

Biggest successes: Resolved a long-standing multiprocessor reliability issue; his solution keeps the good parts of a shared-memory box running despite significant hardware or operating system failures.

Key challenges: Rewiring the Web to foster the development of new kinds of network applications and to improve communication.

Personal note: Likes to team with friends for European treks.

