Academic journal article Attention, Perception and Psychophysics

Does Crossmodal Correspondence Modulate the Facilitatory Effect of Auditory Cues on Visual Search?

Article excerpt

Published online: 31 May 2012

© Psychonomic Society, Inc. 2012

Abstract The "pip-and-pop effect" refers to the facilitation of search for a visual target (a horizontal or vertical bar whose color changes frequently) among multiple visual distractors (tilted bars also changing color unpredictably) by the presentation of a spatially uninformative auditory cue synchronized with the color change of the visual target. In the present study, the visual stimuli in the search display changed brightness instead of color, and the crossmodal congruency between the pitch of the auditory cue and the brightness of the visual target was manipulated. When cue presence and cue congruency were randomly varied between trials (Experiment 1), both congruent cues (low-frequency tones synchronized with dark target states or high-frequency tones synchronized with bright target states) and incongruent cues (the reversed mapping) facilitated visual search performance equally, relative to a no-cue baseline condition. However, when cue congruency was blocked and the participants were informed about the pitch-brightness mapping in the cue-present blocks (Experiment 2), performance was significantly enhanced when the cue and target were crossmodally congruent as compared to when they were incongruent. These results therefore suggest that the crossmodal congruency between auditory pitch and visual brightness can influence performance in the pip-and-pop task by means of top-down facilitation.

Keywords Multisensory processing · Visual search · Color · Lightness · Brightness

Although our senses are subjected to a near-constant barrage of incoming information, our brains appear to effortlessly combine these inputs into meaningful multisensory percepts representing the objects and events that fill the environments in which we live. The question of how the brain "knows" which stimuli to integrate and which to keep separate constitutes the core of the crossmodal binding problem, which poses a major challenge to researchers working in the area of multisensory perception (see Spence, Ngo, Lee, & Tan, 2010). Some basic principles have already been identified, such as the temporal and spatial rules (Meredith & Stein, 1986; Stein & Meredith, 1993). According to these rules, multisensory integration is more likely to occur when the constituent unimodal stimuli are co-localized in space and time (but see Spence, 2012). Additional criteria are, however, needed in order to explain why certain stimuli are integrated more efficiently than others. Recently, Spence (2011) reviewed the evidence pointing to the need to consider crossmodal correspondences as a possible additional constraint on crossmodal binding. The term "crossmodal correspondence" refers to our cognitive system's tendency to preferentially associate certain features or dimensions of stimuli across sensory modalities. This has been demonstrated in many different studies using a variety of experimental paradigms (see Spence, 2011, for a review). While some authors have also spoken of crossmodal correspondences when referring to explicit semantic relations between the sensory representations of the same object in different modalities (e.g., the sound of an engine and the picture of a car), here we will use the term exclusively for those correspondences between simple sensory features. The experiments reported here focus on the nature (and consequences) of the crossmodal correspondences that exist between auditory pitch and the various features of visual stimuli.

One of the first systematic investigations of crossmodal correspondences was conducted by Pratt (1930). He presented tones of different pitches from a hidden loudspeaker and asked participants to indicate the vertical location from which the tones appeared to have originated, using a numerical scale arranged from floor to ceiling. The results revealed that participants assigned higher-pitched tones to higher numbers. …
