Picture categorisation; implications for interface
Myra P. Bussemakers, Abraham de Haan and Paul M.C. Lemmens
Nijmegen Institute for Cognition and Information, P.O. Box 9104, 6500 HE
Nijmegen, The Netherlands, firstname.lastname@example.org
In human-computer interfaces, the analogy of human-human communication seems to be a good starting point for optimising interaction (Brennan, 1990). When two persons are communicating in a face-to-face situation, a whole range of modalities is used to get the message across. Besides speech and non-verbal auditory cues ('uhu'), people use facial expressions and gestures in their attempt to communicate all aspects of an utterance. This can result in a more effective interaction; studies have shown, for instance, that carefully designed auditory feedback can improve the effectiveness of computer usage (Brewster, 1994).
When information is presented in multiple streams, users need to integrate these informational elements into a single, unified experience (Bussemakers and de Haan, 1998). Every occurrence in an information stream has a number of attributes, such as colour, location, loudness and mood. We suppose that, in the process of integration, relevant aspects of this information from the different modalities are combined, because the mind expects these modalities to convey related information. This integration occurs, for instance, when you are listening to a lecture in a big hall. The lecturer is speaking into a microphone, and the sound is transmitted through loudspeakers located on either side of the hall. Although the sound is actually coming from your left and right, you integrate the movement of the lecturer's lips with the sound in such a way that the sound seems to come from the lecturer (i.e. the ventriloquism effect; Howard and Templeton, 1966).