AI Magazine

Introduction to the Special Issue on "Usable AI"

The term "usable" isn't heard all that much in discussions among AI people. You're more likely to hear it when listening to folks who are interested in the human side of computer use--such as people in the field of human-computer interaction (HCI).

But how much distance is there between these two fields? Maybe not that much. After all, the algorithms developed in AI research are often intended to be deployed in systems that involve some sort of interaction with users. The AI may contribute to the basic functionality of the system, such as the provision of recommendations or the support of task execution; or it may enhance the interfaces of a system, as with systems that enable humanlike forms of communication between the user and the system. We will refer to interactive systems that incorporate some sort of AI technology (or technology that at one time was viewed as belonging to AI) as interactive intelligent systems.

Systems that are supposed to be used by people ought to be usable, taking into account human needs, capabilities, and the contexts of use. The field of HCI has accumulated a large repertoire of methods and principles for designing systems that fulfill this criterion.

So do people contributing AI components to interactive systems need to concern themselves with HCI? The answer can be "no" if one of the following two strategies is applied:

Strategy 1: Work on the technical optimization of algorithms of a type that has already been successfully deployed in usable interactive systems.

In many areas, it is known (from research or experience) that a system component that achieves particular technical goals can be put to good use in interactive systems (for example, accurate methods for information retrieval, recommendation, or machine translation). AI researchers can therefore concentrate on improving their algorithms in terms of accepted metrics, without thinking constantly about users and usability. This general strategy has proved immensely useful--and in many cases probably inevitable--for the improvement of AI technology for interactive systems.

But there are limitations to what AI can contribute to interaction in this way. This approach manages to factor users out of the picture by making some assumptions about the forms that user-system interaction takes and the criteria for its success. When we want to deploy AI in new scenarios, with different success criteria for the AI components, we need to think explicitly about the impact that the AI will have on users. A second strategy often comes into play here:

Strategy 2: Develop AI algorithms that can help to realize an apparently beneficial new form of interaction; leave it to HCI people to design and test usable interfaces.

AI researchers often believe that some technology that they have created can lead to new and improved functionality or interaction styles that can benefit users. They may then produce compelling demonstrators that seem to require only the intervention of skilled interaction designers (if even that) before they can be deployed successfully with users.

This strategy has the benefit of giving an AI-technology push to the advancement of interactive systems, exploiting what AI people know about what is now technically possible with AI. But it also has serious limitations.

When someone does in fact try to deploy the algorithms in question in a system that is really used by people, he or she is likely to discover that some changes to the technology are required before the system becomes truly usable and useful. For example, if an intelligent algorithm for the scheduling of personal activities is involved, it may turn out that users of personal scheduling systems have requirements that the algorithm cannot meet. The algorithm may be based on unrealistic assumptions about how users schedule events in their personal lives, or about the extent to which users want to provide explicit input to the system and to be able to understand and second-guess it. …
