Academic journal article Human Factors

Aurally and Visually Guided Visual Search in a Virtual Environment

Article excerpt

INTRODUCTION

Highly visual tasks such as flying and driving rely on the accurate and rapid detection and localization of objects so that appropriate action (e.g., avoidance) can be taken. Because of humans' frontally placed eyes, objects out of our visual field of view (FOV) may pass undetected unless we are assisted by instruments or other aids. The auditory system is well suited to aid visual localization because it has an unlimited field of "view" in that it is sensitive to sounds emanating from locations all around the head. Under optimal conditions, spatial acuity of the visual system is finer than that of the auditory system, but auditory spatial resolution is sufficient to guide the field of best vision to the location of an acoustic source (Heffner & Heffner, 1992).

The utility of auditory cues to visual search has previously been investigated. The presentation of an auditory cue from a speaker in the free field reduces the time taken to find a visual target in three-dimensional (3D) space (Perrott, Cisneros, McKinley, & D'Angelo, 1996). Virtual audio presented over headphones to simulate free-field listening is also effective in reducing the time taken for visual search in laboratory studies (Perrott et al., 1996) and in simulations of realistic flying tasks (Begault, 1993; Begault & Pitman, 1996; Bronkhorst, Veltman, & van Breda, 1996).

Studies comparing the benefits of auditory and visual cues to location have shown that although auditory localization is less accurate than reading relative location from a visual perspective display (Barfield, Cohen, & Rosenberg, 1997), visual searches cued by spatial audio may be as quick as, or quicker than, those cued by traditional radar-style visual displays (Bronkhorst et al., 1996; Perrott et al., 1996). In the latter studies the radar-style visual display was placed away from the initial line of sight, which may have extended search times because it required participants to look away from the search field to interrogate the display. Furthermore, the use of an external eye point in radar-style displays requires additional cognitive processing to remap information onto real-world coordinates. Search may be enhanced if the display is presented in the participant's line of sight, as in a head-up display or helmet-mounted display (HMD), and if it uses an eye point that is coincident with that of the observer (an internal eye point), allowing information to be mapped directly onto physical space. Using an HMD, we measured the time taken for a visual search for targets outside the FOV of the display and investigated the benefits of auditory and visual cues to location. The visual cues were presented in the line of sight and used an internal eye point.
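To make the internal-eye-point idea concrete, the following sketch (in Python; not from the article) shows the transform such a display performs on the observer's behalf: a world-fixed target direction is re-expressed in head-relative coordinates using the tracked head orientation, so the cue can be drawn directly in the line of sight. With an external eye point, this remapping is left to the observer, which is the extra cognitive step described above. The axis convention (x forward, y left, z up) and all names here are illustrative assumptions.

    # Sketch: re-express a world-fixed target direction in head coordinates.
    # Axis convention and yaw/pitch/roll order are assumptions for illustration.
    import numpy as np

    def head_rotation(yaw, pitch, roll):
        """Rotation matrix from world to head coordinates (radians)."""
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])  # yaw about z
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch about y
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll about x
        return (Rz @ Ry @ Rx).T  # world-to-head is the transpose of head-to-world

    def target_relative_to_head(target_dir_world, yaw, pitch, roll):
        """Head-relative azimuth and elevation (degrees) of a world direction."""
        v = head_rotation(yaw, pitch, roll) @ np.asarray(target_dir_world, float)
        azimuth = np.degrees(np.arctan2(v[1], v[0]))   # positive = to the left (assumed)
        elevation = np.degrees(np.arcsin(v[2] / np.linalg.norm(v)))
        return azimuth, elevation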

METHOD

Participants

The six volunteers (one woman, five men) who participated had no previous experience using HMDs or training in visual search tasks. All had normal or corrected-to-normal acuity and reported normal hearing. Their average age was 27 years (range 22-47). All but one were employed by the Defence Science and Technology Organisation.

Stimuli and Apparatus

The experiment was run under the control of a computer (Onyx, Silicon Graphics) that presented the auditory and visual displays, timed responses, and monitored head orientation using a six-degrees-of-freedom electromagnetic head tracker (3-Space Fastrak, Polhemus) that sampled head orientation at 60 Hz.
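The article does not describe its control software, but a hypothetical sketch of the loop implied by this apparatus, sampling the tracker at 60 Hz, timestamping each sample, and timing the response, might look as follows. The functions read_tracker and line_of_sight and the acquisition tolerance are invented placeholders, and the gaze-acquisition stopping rule is an assumption rather than the authors' actual response criterion.

    # Hypothetical 60-Hz experiment-control loop; all names are placeholders.
    import time

    SAMPLE_HZ = 60              # tracker rate reported in the article
    PERIOD = 1.0 / SAMPLE_HZ

    def run_trial(target_az, target_el, read_tracker, line_of_sight,
                  acquire_tolerance_deg=5.0):
        start = time.perf_counter()
        log = []
        while True:
            yaw, pitch, roll = read_tracker()           # six-DOF tracker sample
            az, el = line_of_sight(yaw, pitch, roll)    # current gaze direction
            t = time.perf_counter() - start
            log.append((t, az, el))
            # Response time = first moment gaze falls within tolerance of the
            # target (ignoring azimuth wrap-around for brevity).
            if (abs(az - target_az) < acquire_tolerance_deg and
                    abs(el - target_el) < acquire_tolerance_deg):
                return t, log
            time.sleep(PERIOD)                          # pace the loop at ~60 Hz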

Auditory display. The experiment included three types of auditory display: nonspatial audio, transient spatial audio, and updating spatial audio. The nonspatial audio was a triplet of 50-ms white noise bursts, identical in both ears, that served only as a signal to start the visual search. The spatial auditory displays gave additional information about target direction. Spatial auditory stimuli were individualized in-ear recordings. Each participant sat with his or her head at the center of rotation of a 1. …
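As a minimal sketch of the nonspatial cue described above: three 50-ms white-noise bursts, identical in the left and right channels (diotic). The sample rate, the inter-burst gap, and the raised-cosine onset/offset ramps are assumptions; the article specifies only the burst duration and the triplet structure.

    # Sketch of the nonspatial start cue: a diotic triplet of 50-ms noise bursts.
    import numpy as np

    FS = 44100                          # sample rate in Hz (assumed)
    BURST_S, GAP_S = 0.050, 0.010       # 50-ms bursts; 10-ms gap is assumed

    def noise_burst(duration_s, ramp_s=0.005, rng=None):
        rng = rng or np.random.default_rng()
        burst = rng.uniform(-1.0, 1.0, int(FS * duration_s))  # white noise
        ramp = 0.5 * (1 - np.cos(np.linspace(0, np.pi, int(FS * ramp_s))))
        burst[:ramp.size] *= ramp       # raised-cosine onset...
        burst[-ramp.size:] *= ramp[::-1]  # ...and offset, to avoid clicks
        return burst

    def nonspatial_cue():
        gap = np.zeros(int(FS * GAP_S))
        mono = np.concatenate([noise_burst(BURST_S), gap,
                               noise_burst(BURST_S), gap,
                               noise_burst(BURST_S)])
        return np.column_stack([mono, mono])  # identical left/right channels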
