Edwin Pramana and Ying K. Leung,
School of Information Technology,
Swinburne University of Technology,
P.O. Box 218, Victoria 3122,
The sound card is now a standard component of the personal computer. Increasingly, software applications exploit the audio channel to enrich the human-computer interface, using sounds to complement and supplement the visual information presented on the screen. In recent years, two approaches to non-speech auditory interfaces have emerged -- earcons and auditory icons. The two are distinct in how they address the representation problem: earcons are wholly symbolic and take a musical approach, while auditory icons may be symbolic, metaphorical or nomic and draw on everyday listening.
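To make the idea of a family of structured earcons concrete, the following sketch shows one possible encoding in which timbre identifies an object family and pitch distinguishes events within it. This is purely illustrative: the families, events, instruments and note values are invented for this example and do not come from the study or any cited system.

```python
# Hypothetical structured-earcon family: timbre encodes the object
# family, pitch (as a MIDI note number) encodes the event within it.
# All mappings below are illustrative assumptions, not from the paper.

# Each object family is assigned a distinct timbre (instrument).
TIMBRE_BY_FAMILY = {
    "file": "piano",
    "mail": "flute",
}

# Each event within a family is assigned a distinct pitch.
PITCH_BY_EVENT = {
    "open": 60,   # middle C
    "close": 64,  # E above middle C
    "error": 55,  # G below middle C
}

def structured_earcon(family, event):
    """Compose an earcon specification from its structural components.

    Because the mapping is systematic, a listener who knows the rules
    can in principle infer the meaning of an earcon they have not
    heard before -- the property the study's first hypothesis tests.
    """
    return {
        "timbre": TIMBRE_BY_FAMILY[family],
        "pitch": PITCH_BY_EVENT[event],
    }

# Example: the earcon for a mail error combines the mail timbre with
# the error pitch.
mail_error = structured_earcon("mail", "error")
```

The point of the sketch is that the earcon set grows combinatorially (families x events) while the number of rules a user must learn grows only additively, which is why learnability of the mapping, rather than of individual sounds, is the key question.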
Despite the wide range of research conducted on earcons, systematic application of earcons at the user interface is scarce. This may be attributed to the fact that humans have a limited capacity for learning arbitrary sounds (Patterson and Milroy, 1980). Whilst technology enables families of earcons to be easily created, their usability is primarily determined by how easily they can be learned (Leung, Smith, Parker and Martin, 1997). This is reflected in the fact that studies of earcon learnability have generally involved only a small set of earcons, typically no more than eight.
This paper describes a study to explore the learnability of structured earcons. It is hypothesised that (1) structured earcons are easier to learn if the user knows how the timbre and pitch of the earcons are mapped to the objects or events they represent, and that (2) the user's prior knowledge of music affects the learnability of structured earcons. The experiment was conducted in a simulated multi-tasking environment. The results of this study will provide useful