general gesture interface. Based on that input, it reacts to the gestures, generates application-dependent feedback where necessary, and controls the devices assigned to the activated region. Each device has an interface through which it receives control signals from the computer.
A problem of interacting with several complex devices is that the user has to remember the associated gestures and their functions. An investigation showed that only about 24 static gestures are easy to perform, and only half of those are very easy. ARGUS reduces the number of necessary gestures by grouping the device control operations into device-independent classes of similar or equal functionality according to qualitative criteria. Adjusting the loudness of a tuner or a TV set, for example, is the same operation on both devices and is controlled by the same gesture. Likewise, the "next/previous title" function of the CD player is similar to the "next/previous program" function of the TV set and is controlled by the same gesture. This reduces the overall number of gestures and makes the dialog more intuitive and easier to remember. Additional functionality is made available through gesture sequences.
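The grouping idea can be sketched as a single device-independent mapping from gestures to abstract commands, which each device then interprets in its own terms. This is only an illustrative sketch: the gesture names, `Command` enumeration, and device classes are assumptions for exposition, not the ARGUS implementation.

```python
from enum import Enum, auto

class Gesture(Enum):
    SWIPE_RIGHT = auto()   # "next"
    SWIPE_LEFT = auto()    # "previous"
    POINT_UP = auto()      # "louder"

class Command(Enum):
    NEXT = auto()
    PREVIOUS = auto()
    VOLUME_UP = auto()

# One device-independent mapping: the same gesture always denotes
# the same class of operation, regardless of the target device.
GESTURE_TO_COMMAND = {
    Gesture.SWIPE_RIGHT: Command.NEXT,
    Gesture.SWIPE_LEFT: Command.PREVIOUS,
    Gesture.POINT_UP: Command.VOLUME_UP,
}

class CDPlayer:
    def handle(self, cmd):
        # The CD player interprets NEXT/PREVIOUS as title selection.
        if cmd is Command.NEXT:
            return "next title"
        if cmd is Command.PREVIOUS:
            return "previous title"
        return "volume up"

class TVSet:
    def handle(self, cmd):
        # The TV set interprets the same commands as program selection.
        if cmd is Command.NEXT:
            return "next program"
        if cmd is Command.PREVIOUS:
            return "previous program"
        return "volume up"

def dispatch(gesture, device):
    """Translate a recognized gesture into the target device's operation."""
    return device.handle(GESTURE_TO_COMMAND[gesture])
```

With this structure, `dispatch(Gesture.SWIPE_RIGHT, CDPlayer())` selects the next title while the same gesture on a `TVSet` selects the next program, so the user memorizes one gesture per operation class rather than one per device function.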
Much of the ARGUS system is still conceptual. A prototype implementation demonstrates that selection by pointing, combined with simple gestures, is feasible. On a Pentium PC (133 MHz, 16 MB RAM), a rate of 5 to 15 processed frames per second was achieved; rates below 10 frames per second turned out to be too low for a natural and comfortable dialog. Experiments showed that the angular resolution of pointing is about 1.15°; that is, two points can be distinguished only if their angular distance relative to the user is at least this amount.
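The angular resolution translates into a minimum spatial separation between selectable targets. A small worked example, assuming the standard relation s = d·tan(θ) between distance d and separation s (the 2 m distance is an illustrative value, not from the source):

```python
import math

def min_separation(distance_m, angular_resolution_deg=1.15):
    """Smallest separation (in meters) between two targets that
    pointing can still distinguish, at a given distance from the user."""
    return distance_m * math.tan(math.radians(angular_resolution_deg))

# At 2 m from the user, two targets must be roughly 4 cm apart
# to be resolved by pointing.
```

So regions of a wall or shelf that a user selects by pointing from across a room should be laid out at least a few centimeters apart.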