User-Friendly Machines Help Boost Performance in Robots

Article excerpt

Technological advances in the field of human-computer interaction will pay dividends for U.S. military programs focusing on battlefield robots and tactical data networks, experts said.

Jim Osborne, a robotics specialist with the Pittsburgh Robotics Initiative, believes that autonomous machines today have enough intelligence to govern their own actions.

But not everyone agrees. While some experts consider autonomous capabilities mature enough for full deployment in robotic platforms, others advocate a more cautious approach.

Osborne predicts that U.S. military programs will support a "mixed mode of operations," including both fully autonomous robots and others that need more human control.

The "program to watch" for robotics technology development is the Army's Future Combat System, said Osborne. Under this project, the Army plans to develop a fleet of light combat vehicles, some of which may be remotely operated. "The success of this program will shape the future of military robotics," Osborne said. "We'll see how well the rubber meets the road."

Additionally, said Osborne, there are many other areas where the U.S. military services will benefit from robotics technology, including de-mining and search-and-rescue missions.

Some of the challenges in developing advanced robots can be attributed to the fact that "computers are hard to use," said Clinton Kelly, senior vice president of advanced technology programs at Science Applications International Corporation, in San Diego. "It is unarguable that computers have changed the way we do business, but it is debatable whether or not they have made it more productive," Kelly told a conference of the Government Electronics and Information Technology Association (GEIA).

Computers make demands on certain cognitive abilities such as logical reasoning and spatial memory, he explained. Users who score in the top 25 percent in logical and spatial ability perform twice as well with a computer as those in the lowest 25 percent. "We concluded that one out of three college-educated people can't use computers very effectively," Kelly said.

More research, therefore, is needed in human-computer interaction, he stressed. One of the most promising areas is speech recognition. Systems are available today for about $200, with vocabularies of 50,000 to 100,000 words in multiple languages. Training time has been reduced to about three to 10 minutes, with 98 percent recognition accuracy, Kelly said. One growing application for voice recognition technology is retrieving one's e-mail.

Interface devices will lead to what Kelly calls "the age of proxy devices or the post-PC era" in the next three years. He believes that general-purpose computers will be replaced with simpler devices, set up for specific tasks such as e-mail or Web-surfing.

While speech recognition, converting audible signals into digital symbols, is a tough problem, an even harder one is getting computers to understand natural language, the actual meaning of words. Kelly summed up the problem of semantic ambiguity. "You take a word like 'strike' and it has something like 20 to 40 definitions. The average common noun out of the top 200 in usage has about eight meanings and the average common verb about 12 meanings."
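The difficulty Kelly describes can be made concrete with a small sketch. The sense inventory and the bag-of-words matching heuristic below are illustrative assumptions, not a real dictionary or a real disambiguation algorithm; the point is only that one surface word maps to many senses, and a program must somehow pick one from context.

```python
# Toy illustration of lexical ambiguity. The sense inventory is
# hypothetical; real words like "strike" carry 20-40 dictionary senses.
SENSES = {
    "strike": [
        "hit with force",
        "work stoppage by employees",
        "military attack on a target",
        "discover oil or gold",
        "remove from a record",
    ],
    "bank": [
        "financial institution",
        "river edge",
        "tilt of an aircraft",
    ],
}

def disambiguate(word, context_words):
    """Pick the sense whose gloss shares the most words with the context.

    A crude overlap heuristic, shown only to make the problem concrete;
    real systems use far richer statistical or knowledge-based models.
    """
    context = set(context_words)
    return max(SENSES[word],
               key=lambda gloss: len(context & set(gloss.split())))

print(len(SENSES["strike"]))  # five toy senses for one word
print(disambiguate("bank", ["the", "river", "edge", "was", "muddy"]))
```

Even this toy chooser fails as soon as the context shares no literal words with the right gloss, which is exactly why word meaning is a harder problem than transcribing the audio.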

There are syntactical ambiguities as well. Any one sentence could potentially have at least half a dozen meanings, Kelly explained.
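One way to see how quickly syntactic ambiguity multiplies, beyond the half-dozen readings Kelly mentions: the number of distinct binary parse trees over an n-word phrase is the (n-1)th Catalan number, a standard combinatorial fact (the specific counts below are from that formula, not from the article).

```python
from math import comb

def catalan(n):
    # nth Catalan number: C(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

def bracketings(num_words):
    # Distinct binary parse trees over num_words leaves
    return catalan(num_words - 1)

for n in (3, 4, 5, 6):
    print(n, "words ->", bracketings(n), "trees")
# 3 words -> 2 trees, 4 -> 5, 5 -> 14, 6 -> 42
```

A parser must rule out almost all of these structures using grammar and meaning, which is why sentence understanding compounds the word-sense problem rather than merely adding to it.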

The way to deal with these problems is to develop computers that know and reason, according to Kelly. "Turns out we've been doing that in the machine intelligence community for a long time. We have created systems that are called knowledge-based systems or expert systems," Kelly said. He cited a project known as Cyc, short for encyclopedia, at Cycorp in Austin, Texas. "They have something like a million assertions in their database that reflects about 10 years of work. …
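The knowledge-based systems Kelly refers to are often built on forward chaining: rules fire whenever all their premises are known facts, adding their conclusion, until nothing new can be derived. The rules below are invented for illustration and are not drawn from Cyc or any real knowledge base.

```python
# Minimal forward-chaining sketch of an expert system.
# Each rule is (set of premises, conclusion); all premises must be
# known facts before the conclusion is added.
RULES = [
    ({"has_wheels", "has_engine"}, "is_vehicle"),
    ({"is_vehicle", "is_unmanned"}, "is_robot_vehicle"),
    ({"is_robot_vehicle", "is_armored"}, "is_combat_robot"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain(
    {"has_wheels", "has_engine", "is_unmanned", "is_armored"}, RULES)
print("is_combat_robot" in derived)  # True
```

The appeal of the approach, and the reason projects like Cyc require years of effort, is that the reasoning machinery is simple; the hard part is hand-building a knowledge base large enough to cover common-sense meaning.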