Interview: Autonomic computing - are we there yet?


For the past five years, Dave Bartlett has been IBM's chief authority guiding large enterprises on how best to use self-managing technologies and standards. Today, as vice-president of industry solutions, Bartlett is charged with using his autonomic-computing expertise to create highly repeatable, end-to-end packages that any company in a vertical market segment could implement easily. Here he delivers a status report to Beth Schultz about autonomic computing, a foundational New Data Center concept.

Has autonomic computing achieved its promise yet?

If you look back five years, the big concern was that IT systems were too complex to manage and maintain. That's still where we are today, and the autonomic standard of self-managing technology is still the solution. It's the one initiative that cuts across multiple customer platforms and technologies.

What we didn't realise was how much time and effort it would take to have autonomic computing take hold throughout the industry and truly transform the way we work. I can point to many individual examples of success, but autonomic computing still is not pervasive across the industry.

Have you changed your approach because autonomic computing technologies aren't as widely used as you thought they would be by now?

We knew from the beginning that solving the problem of complexity went beyond IBM's scope. But in working with large enterprises, we have learned how much collaboration is needed on the standards and technologies that will bring about this sea change. And so we have built what I call the autonomic ecosystem. This is about getting participation not just from other software vendors but from systems integrators, channel partners and resellers. The ecosystem also leans heavily on research, and we are now working with other research organisations, corporations and universities.

Addressing the full set of problems customers face requires a certain amount of innovation, as well as group-level support for this transformation. We have moved from yesterday's concept of innovation as a closed, proprietary thing to something that really needs to be open. And when I say open, I mean open in a way I haven't seen in this industry, or in IBM, before: contributing to open source, working openly on standards with traditional competitors, and bringing our research-and-development resources right to the customer site.

I've read IBM has implemented more than 500 self-managing features in 75 distinct products. What are the coolest?

At the most fundamental level, an example is the airbag technology in the ThinkPad [notebooks]. One of the things we focus on is self-protection. Really the biggest danger to a laptop is dropping it and then losing data on the hard drive if the head crashes. So we put in a chip that can sense a sudden change in velocity and pull the read-write heads off the platters, thus protecting the data.
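The idea behind this kind of self-protection can be sketched in a few lines: an accelerometer samples motion, and when total acceleration drops toward zero (the signature of free fall), the heads are parked before impact. This is a hypothetical illustration, not IBM's actual firmware; the threshold, function names and `park_heads` callback are all invented for the example.

```python
FREE_FALL_THRESHOLD = 0.3  # in g; near-zero total acceleration suggests free fall


def magnitude(ax, ay, az):
    """Total acceleration from the three accelerometer axes."""
    return (ax * ax + ay * ay + az * az) ** 0.5


def check_and_protect(ax, ay, az, park_heads):
    """Park the read-write heads if the chassis appears to be in free fall.

    Returns True if protective action was taken, False otherwise.
    """
    if magnitude(ax, ay, az) < FREE_FALL_THRESHOLD:
        park_heads()  # move heads off the platters before impact
        return True
    return False
```

A resting laptop reads roughly 1 g (gravity), so the check stays quiet until the reading collapses toward zero during a fall.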

Another example is the eFuse technology in the Power5 chips. It can change circuit design based on environmental attributes, such as voltage or temperature. This means performance problems can be handled right in the hardware without human intervention.

The more-complex cool examples come at the top of the stack, leveraging the autonomic capabilities of numerous products. We focus on workload management, provisioning and IT optimisation, building these capabilities into products such as the latest WebSphere, the DB2 Viper release, and Tivoli provisioning and orchestration software. This reduces the need to provision hardware for peak requirements by enabling real-time provisioning of what's needed, improving availability.
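The provisioning idea, adding capacity only when demand requires it rather than sizing for forecast peaks, can be illustrated with a simple threshold rule. This is a generic sketch, not Tivoli's actual orchestration logic; the function name, thresholds and bounds are all invented for the example.

```python
def provision(current_servers, utilisation, low=0.3, high=0.8,
              min_servers=1, max_servers=100):
    """Return a new server count based on current average utilisation.

    Scale out when utilisation is high and release capacity when it is low,
    so hardware tracks actual demand instead of a worst-case forecast.
    """
    if utilisation > high:
        return min(max_servers, current_servers + 1)  # add capacity
    if utilisation < low:
        return max(min_servers, current_servers - 1)  # reclaim capacity
    return current_servers  # demand within the comfortable band
```

Real orchestration systems add damping (cool-down periods, rate limits) so the pool does not oscillate, but the core loop is this same sense-decide-act cycle.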

How has autonomic computing shown its enterprise value?

One example is how Guardian Life Insurance [in New York] applied autonomic technology for spotting problems in business applications. Before IT administrators at Guardian Life deploy new applications, they simulate the deployment in a test environment. During the test, the various applications, servers and network devices naturally generate error logs when they experience hardware and software failures. The challenge was that these error logs appeared in different formats, which made it hard to identify the source of the problems. The time and effort it took to isolate and fix problems was costing the company money. Using IBM's autonomic problem-determination technology, Guardian Life systems now detect, analyse and diagnose problems without human intervention, so repair is faster and easier. IBM's technology also allowed Guardian Life to centralise the scattered error logs into a single format so they can be viewed, analysed and resolved easily. Guardian Life has said it has cut the time required to fix problems by 90 per cent.
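The log-centralisation step described here, mapping differently formatted error logs onto one common schema so they can be analysed together, can be sketched generically. This is not IBM's problem-determination technology; the two input formats and field names below are invented purely for illustration.

```python
import re

# Two hypothetical source log formats, each normalised to the same record shape.
PATTERNS = [
    # e.g. "2006-05-01 12:00:00 ERROR appserver1: connection refused"
    re.compile(r"(?P<time>\S+ \S+) (?P<severity>\w+) (?P<source>\w+): (?P<msg>.+)"),
    # e.g. "[appserver2] FATAL | disk full | 2006-05-01T12:00:05"
    re.compile(r"\[(?P<source>\w+)\] (?P<severity>\w+) \| (?P<msg>[^|]+) \| (?P<time>\S+)"),
]


def normalise(line):
    """Map a raw log line from any known format onto one common schema.

    Returns a dict with time/severity/source/msg keys, or None if the
    line matches no known format.
    """
    for pattern in PATTERNS:
        match = pattern.match(line)
        if match:
            record = match.groupdict()
            record["msg"] = record["msg"].strip()
            return record
    return None
```

Once every device and application's errors land in one schema, correlating a failure across servers and network gear becomes a query rather than a manual hunt through incompatible files.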

Another example comes from within IBM, as the host of the Grand Slam tennis tournaments. We're constantly challenged with the cost of running these events at the same time as delivering new features and capabilities. And as with any sporting event, we are faced with incredible peaks of traffic. So how do you manage that reasonably without provisioning for peak performance? We worked with the hosting group to transform the infrastructure into more of a virtualised infrastructure. We have had no outages, and that's with a 130 per cent increase in website visits over time while achieving a 70 per cent reduction in cost per visit - an overall 35 per cent reduction in annual hosting costs since 2001. You can think of this as an industry solution for the entertainment or sports industry involving our pSeries servers and a number of Tivoli software products, as well as WebSphere and DB2 - multiple products working together, integrated based on autonomic computing standards and technology.

