Academic journal article: Human Factors

Application of a Three-Dimensional Auditory Display in a Flight Task

INTRODUCTION

A three-dimensional (3D) auditory display presents sounds from arbitrary directions spanning a sphere around the listener. The practical application of such displays has become feasible through the development of techniques for creating virtual sound sources using headphone presentation (Begault, 1993; Begault & Wenzel, 1993; Ricard & Meirs, 1994; Sorkin, Wightman, Kistler, & Elvers, 1989; Wenzel, Arruda, Kistler, & Wightman, 1993; Wenzel, Wightman, & Foster, 1988; Wightman & Kistler, 1989b). The advantages of this display are threefold:

First, in addition to the information contained in the signal itself, relevant directional information can be conveyed using the natural sound-localization ability of humans. Because it is common in high-workload tasks to present information primarily through the visual channel, use of the auditory channel may reduce the workload and shorten reaction times (Perrot, 1988; Wickens, 1984).

Second, it is well known that spatial separation of signal and noise sources lowers the threshold at which signals can be detected and discriminated (Bronkhorst & Plomp, 1988; Levitt & Rabiner, 1967; Ricard & Meirs, 1994). In other words, spatial separation of sound sources improves the effective signal-to-noise ratio. This is important not only for detection and discrimination of noisy signals and speech but also for simultaneous discrimination of multiple (speech) signals.

Third, assigning spatial positions to sound sources improves identification of multiple sounds. For example, when different voices in a telephonic conference originate from different positions relative to the listener, it is easier to follow which speaker is talking, even if several are talking at the same time.

The technique used to create virtual sound sources over headphones is based on simulation of the acoustic effects of the listener's shoulders, head, and external ears by linear transfer functions (head-related transfer functions, or HRTFs). The HRTFs are implemented in digital filters, one for each ear, which modify the signals fed into the headphones. HRTFs are typically measured by recording a test signal, generated by a loudspeaker placed in a certain direction, with probe microphones inserted in the ear canals of a human listener. Alternatively, the measurements can be performed with a standardized acoustic manikin, such as the Knowles Electronics Manikin for Acoustic Research (KEMAR; Burkhard & Sachs, 1975).
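As a concrete illustration of the filtering step described above, a mono signal can be rendered binaurally by convolving it with a measured head-related impulse response (HRIR, the time-domain counterpart of the HRTF) for each ear. This is a minimal sketch only; the test signal, sampling rate, and placeholder impulse responses are assumptions, not taken from the article.

```python
# Minimal sketch of HRTF-based binaural filtering: one digital filter per ear,
# applied to a mono source signal. The HRIRs below are placeholders; in practice
# they come from probe-microphone or manikin measurements for the desired direction.
import numpy as np
from scipy.signal import fftconvolve

fs = 44100                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.5, 1 / fs)
source = 0.5 * np.sin(2 * np.pi * 1000 * t)  # mono test signal: 1 kHz tone

hrir_left = np.random.randn(256) * np.hanning(256)   # placeholder left-ear HRIR
hrir_right = np.random.randn(256) * np.hanning(256)  # placeholder right-ear HRIR

# Convolve the source with each ear's HRIR to obtain the headphone signals.
left = fftconvolve(source, hrir_left)
right = fftconvolve(source, hrir_right)
binaural = np.stack([left, right], axis=1)   # 2-channel output for headphones
```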

When the digital filters are loaded with HRTFs for a specific angle, a virtual sound source is created with a fixed direction relative to the listener's head. In order to create a virtual sound source at a stable point in space, the head position and orientation must be recorded with a tracking device, and the filters must be updated in real time so that the HRTFs always correspond to the position of the sound source relative to the head.
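A minimal sketch of this head-tracked updating step is given below, assuming a catalog of HRIRs measured on an azimuth grid in the horizontal plane. The grid spacing, catalog layout, and helper function are hypothetical; a practical renderer would also interpolate between measured HRTFs and cross-fade filters to avoid audible switching artifacts.

```python
# Sketch of head-tracked filter selection: for each audio block, read the head
# orientation from the tracker, compute the source direction relative to the head,
# and load the nearest measured HRIR pair into the per-ear filters.
import numpy as np

azimuth_grid = np.arange(0, 360, 15)                     # measurement grid (deg), assumed
hrir_catalog = {int(az): (np.random.randn(256), np.random.randn(256))
                for az in azimuth_grid}                  # placeholder HRIR pairs

def select_hrirs(source_azimuth_world, head_yaw):
    """Return the HRIR pair for the source direction relative to the head."""
    relative_az = (source_azimuth_world - head_yaw) % 360
    # Wrapped angular distance to each grid point, then pick the nearest.
    diff = np.abs(((azimuth_grid - relative_az) + 180) % 360 - 180)
    nearest = int(azimuth_grid[np.argmin(diff)])
    return hrir_catalog[nearest]

# Per audio block: reselect the filters from the latest tracker reading, so the
# virtual source stays fixed in space while the head turns.
head_yaw = 30.0                                          # degrees, from the head tracker
hrir_left, hrir_right = select_hrirs(source_azimuth_world=90.0, head_yaw=head_yaw)
```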

A question relevant to the practical application of 3D virtual auditory displays is whether virtual sound sources can be localized with an accuracy equal to that of real sources. This question was addressed in studies performed by Wightman and Kistler (1989b) and Wenzel et al. (1993). The task in these studies was to localize a fixed source without making head movements. Source positions covered a 360° range of azimuths and elevations from 36° below the horizontal plane to 54° above it. A well-known phenomenon occurring during localization without head movements is that source positions within cones of confusion (i.e., source positions that cause similar interaural differences in level and arrival time) are easily interchanged (Mills, 1972). Common errors are reflections of the source position about the vertical or horizontal plane passing through the listener's ears. These are called front-back and up-down confusions, respectively.
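For illustration, a generic scoring rule (not necessarily the analysis procedure used in the studies cited above) counts a localization response as a front-back or up-down confusion when it lies closer to the mirror image of the target about the frontal or horizontal plane through the ears than to the target itself:

```python
# Illustrative confusion scoring for localization responses. Azimuth is measured
# counterclockwise from straight ahead, elevation upward from the horizontal plane;
# the coordinate convention and thresholds are assumptions for this sketch.
import numpy as np

def to_unit_vector(azimuth_deg, elevation_deg):
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

def angle_between(a, b):
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def classify(target_az, target_el, resp_az, resp_el):
    resp = to_unit_vector(resp_az, resp_el)
    err_target = angle_between(to_unit_vector(target_az, target_el), resp)
    err_fb = angle_between(to_unit_vector(180.0 - target_az, target_el), resp)  # front-back mirror
    err_ud = angle_between(to_unit_vector(target_az, -target_el), resp)         # up-down mirror
    if err_fb < err_target:
        return "front-back confusion"
    if err_ud < err_target:
        return "up-down confusion"
    return "correct hemifield"

print(classify(target_az=30, target_el=0, resp_az=150, resp_el=10))  # -> front-back confusion
```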

Wightman and Kistler (1989b) used the individual's own HRTFs in creating virtual sound sources. …
