AI technology comes of age

"Stair, please fetch the stapler from the lab," says the man seated at a conference room table. The Stanford Artificial Intelligence Robot, standing nearby, replies in a nasal monotone, "I will get the stapler for you."

Stair pivots and wheels into the adjacent lab, avoiding a number of obstacles on the way. Its stereoscopic camera eyes swivel back and forth, taking in the contents of the room. It seems to think for a moment, then approaches a table for a closer look at an oblong metallic object. Its articulated arm reaches out, swivels here and there, and then gently picks up the stapler with long, rubber-clad fingers. It heads back to the conference room.

"Here is your stapler," says Stair, handing it to the man. "Have a nice day."

These are indeed nice days for artificial intelligence researchers. While Stair's performance might not seem much better than that of a dog fetching the newspaper, it's a technological tour de force unimaginable just a few years ago.

Indeed, Stair represents a new wave of AI, one that integrates learning, vision, navigation, manipulation, planning, reasoning, speech and natural-language processing. It also marks a transition of AI from narrow, carefully defined domains to real-world situations in which systems learn to deal with complex data and adapt to uncertainty.

AI has more or less followed the "hype cycle" popularized by Gartner: Technologies perk along in the shadows for a few years, then burst on the scene in a blaze of hype. Then they fall into disrepute when they fail to deliver on extravagant promises, until they eventually rise to a level of solid accomplishment and acceptance.

AI has its roots in the late 1950s but came to prominence in the "expert systems" of the 1980s. In those systems, experts -- chess champions, for example -- were interviewed, and their rules of logic were hard-coded in software: If Condition A occurs, then do X. But if Condition B occurs, then do Y. While they worked reasonably well for specialized tasks such as playing chess, they were "fragile," says Eric Horvitz, an AI researcher at Microsoft Research.

"They focused on capturing chunks of human knowledge, and then the idea was to assemble those chunks into reasoning systems that would have the expertise of people," Horvitz says. But they couldn't "scale," or adapt, to conditions that had not explicitly been anticipated by programmers.

Today, AI systems can perform useful work in "a very large and complex world," Horvitz says. "Because these small [software] agents don't have a complete representation of the world, they are uncertain about their actions. So they learn to understand the probabilities of various things happening, they learn the preferences [of users] and costs of outcomes and, perhaps most important, they become self-aware."

These abilities derive from something called machine learning, which is at the heart of many modern AI applications. In essence, a programmer starts with a crude model of the problem he's trying to solve but builds in the ability for the software to adapt and improve with experience. Speech recognition software gets better as it learns the nuances of your voice, for example, and over time more accurately predicts your preferences as you shop online.
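The crude-model-plus-experience idea can be shown with a minimal sketch. This is not the algorithm behind any product mentioned here; it assumes a single numeric preference score and a simple online update rule, purely for illustration.

```python
# Minimal sketch of machine learning as described above: start with a
# crude model, then nudge it toward each new observation.

def train(observations, learning_rate=0.1):
    """Learn a preference score from a stream of observed ratings."""
    estimate = 0.0  # crude starting model: assume no preference
    for observed in observations:
        error = observed - estimate
        estimate += learning_rate * error  # adapt with experience
    return estimate

# With more experience, the model's estimate moves closer to the
# user's true preference (5.0 in this made-up example).
after_few = train([5.0] * 3)
after_many = train([5.0] * 50)
print(abs(5.0 - after_many) < abs(5.0 - after_few))  # True
```

The point is the loop, not the formula: instead of fixed rules, the program carries a model that improves as data accumulates, which is what lets speech recognizers and shopping recommenders sharpen with use.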
