Sun and Oracle heads, Scott McNealy and Larry Ellison, have outlined a shared vision of grid computing at this year’s OracleWorld conference. During his keynote, McNealy said Sun’s and Oracle’s strategies complemented each other in key areas such as enterprise grid computing, security, high availability and clustering.
McNealy said the companies were teaming to help grid computing evolve from a scientific research tool into a commercial enterprise technology.
Oracle’s 10g database product took advantage of the data centre environment Sun was building with its N1 strategy, McNealy said.
According to Sun, its N1 architecture comprises foundation resources, virtualisation, provisioning, policy and automation, and monitoring. By making a data centre work like a single system, N1 turned previously siloed resources into a pool of virtual resources. Services could be mapped across this resource pool and customers could create policy-driven services and assign priority to critical services.
Sun and Oracle plan to increase overall system performance by utilising new functionality in Oracle’s 10g database, McNealy said.
He lamented the complexity of data centres, which required so many employees just to keep things running so that organisations could deliver their services every day.
McNealy likened the problem to someone deciding to fly from one city to another after first handcrafting a “jalopy airplane” — buying all the parts and custom-building the aircraft.
Today’s data centres are built like that, he said.
“No two are alike — they’re not even close … they’re like different species,” McNealy said.
In the case of airplanes, while it might be cheaper to buy all the parts separately, the total cost of delivery, by the time they were assembled and tested, would be higher than buying ready-built machines, he said.
The same applied to data centres.
Sun’s solution was to offer Intel-based servers running Linux or Solaris, which, according to the firm, were ideal for clustering together in a customer-ready setup that could be used for grid computing purposes. Those servers compete with Dell Computer, which sells its own Intel-based servers running Linux or Windows as another low-cost hardware option that fits the grid model.
Going the customer-ready Sun route — running system software on the firm’s x86 servers — would result in a better deal for customers than going with a components-focused Dell solution, McNealy suggested.
“We’re not going to (provide the same pricing) on Dell equipment,” he said.
Sun would still sell its server software on the Dell servers, but “it’ll be a little more expensive”, McNealy said.
Frank Lauritzen, manager of database operations for the Meteorological Service of Canada, said Sun’s was one offering that seemed to be “much cheaper … than what the mainframe used to offer”.
He said there were always organisations that would benefit from buying a complete customer-ready server software and hardware package — it was just a matter of deciding when that choice was appropriate for an organisation.
“It’s a tough call — I’ve made calls both ways. Sometimes we’ve done things ourselves and then looked at how things have gone and said we should have left it up to someone else — other times it’s been the other way around,” Lauritzen said.
In general, he said he had been seeing a trend toward complete services, which “at times is valid”.
Oracle wants to help organisations address one of the major challenges that comes with grid setups: creating the illusion that all those machines are one. That was one of the messages in Oracle CEO Larry Ellison’s keynote address.
Ellison said data centre employees working for companies that had switched to the grid model had a lot more work on their hands managing all those servers. For example, they now had to install software and patches on 100 to 200 two-processor machines, whereas before they only had to worry about five or six larger servers.
This management problem had the potential to defeat the purpose of the grid.
“If there are no management or provisioning tools, whatever savings we get in hardware will be lost in labour,” Ellison said.
Oracle’s Grid Control software was designed to help customers monitor and manage entire Oracle grid infrastructures, from databases and applications to storage, within a single console, Ellison said.
The grid management product would provide users with advice on how to plan for capacity, availability and performance needs within a grid.
The software could compare different database servers in the grid and reveal how they differ, Ellison said. It could tell the system to automatically load balance and tune itself, adapting resource usage to workload patterns.
The software features a “control repository” that contains performance, availability and configuration data about the enterprise, as well as a set of centralised management capabilities that transform that data into valuable information, Oracle said.
Using Grid Control, administrators could reduce the complexity of managing multiple servers in a cluster and automate the management of computing resources.
Grid computing was the biggest wave to hit the IT industry since the introduction of IBM’s 360 mainframe in 1964, according to Ellison, who reminisced about the quest many hardware companies had embarked on over the last 40 years: building ever-bigger servers to get more computing capacity.
There was a problem with that approach, Ellison said.
“Once you have the largest server there is, you’re done; there’s no place to go (to get more capacity) if you have a single-machine architecture,” he said.
Many organisations were now at a point where their applications had outgrown their machines, he said.
Ellison reiterated the downsides to large-server setups, including the idea that they were very expensive and that once capacity in one large server was maxed out, the customer had to “throw it out and spend millions” again on the next biggest machine that came out.
Perhaps the “worst of the Achilles’ heels” of large server environments was the problem of a single point of failure, he said — that when one machine went down, the users went down with it.
Grid computing addressed both these issues by using low-cost two- or four-processor machines that users could simply plug in to increase capacity as they needed it, he said.
The fault tolerance and load-balancing capabilities of grids meant that if one small server in the grid went down, “users wouldn’t see any interruptions in their services at all,” he said.