Attention, Perception, & Psychophysics

Reward Expectation Influences Audiovisual Spatial Integration

Published online: 31 May 2014

© The Psychonomic Society, Inc. 2014

Abstract In order to determine the spatial location of an object that is simultaneously seen and heard, the brain assigns higher weights to the sensory inputs that provide the most reliable information. For example, in the well-known ventriloquism effect, the perceived location of a sound is shifted toward the location of a concurrent but spatially misaligned visual stimulus. This perceptual illusion can be explained by the usually much higher spatial resolution of the visual system as compared to the auditory system. Recently, it has been demonstrated that this cross-modal binding process is not fully automatic, but can be modulated by emotional learning. Here we tested whether cross-modal binding is similarly affected by motivational factors, as exemplified by reward expectancy. Participants received a monetary reward for precise and accurate localization of brief auditory stimuli. Auditory stimuli were accompanied by task-irrelevant, spatially misaligned visual stimuli. Thus, the participants' motivational goal of maximizing their reward was put in conflict with the spatial bias of auditory localization induced by the ventriloquist situation. Crucially, the amounts of expected reward differed between the two hemifields. As compared to the hemifield associated with a low reward, the ventriloquism effect was reduced in the high-reward hemifield. This finding suggests that reward expectations modulate cross-modal binding processes, possibly mediated via cognitive control mechanisms. The motivational significance of the stimulus material, thus, constitutes an important factor that needs to be considered in the study of top-down influences on multisensory integration.

Keywords: Cognitive and attentional control · Multisensory processing · Spatial localization

Effective interaction with the world requires an optimal use of sensory input. This includes the integration of redundant information derived from the different sensory modalities, such as information about the spatial location of an object that is simultaneously seen and heard (Ernst & Bülthoff, 2004; Stein & Stanford, 2008). A fundamental question in multisensory research is how the brain integrates such independent sensory estimates into one coherent percept. According to the modality appropriateness hypothesis (Welch, 1999; Welch & Warren, 1980) and more recent computational models based on maximum likelihood estimation (Ernst & Bülthoff, 2004; Fiser, Berkes, Orbán, & Lengyel, 2010; Sato, Toyoizumi, & Aihara, 2007; Witten & Knudsen, 2005), the sensory modality that is most reliable for a given perceptual problem dominates the multisensory percept. Neither the modality appropriateness hypothesis nor the maximum likelihood estimation model, however, takes into account attentional, motivational, emotional, or any other factors that could potentially influence the process of cross-modal binding.
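For concreteness, here is a brief worked sketch of the reliability-weighted combination that such maximum likelihood models assume (the sketch and its symbols are illustrative and not taken from the original article): if $\hat{s}_V$ and $\hat{s}_A$ denote the unimodal visual and auditory location estimates, with variances $\sigma_V^2$ and $\sigma_A^2$, the combined estimate weights each cue by its relative reliability (inverse variance),

$$\hat{s}_{AV} = w_V\,\hat{s}_V + w_A\,\hat{s}_A, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2}, \qquad w_A = \frac{1/\sigma_A^2}{1/\sigma_V^2 + 1/\sigma_A^2}.$$

Because spatial variance is typically much lower for vision than for audition, $w_V$ approaches 1 and the combined percept is pulled toward the visual location; degrading the visual stimulus increases $\sigma_V^2$ and shifts the weights toward audition.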

An example of visual dominance over audition that has been extensively studied in laboratory settings is the ventriloquism effect (Bertelson & de Gelder, 2004; Chen & Vroomen, 2013; Recanzone, 2009): When an auditory stimulus is presented together with a spatially misaligned visual stimulus, spatial localization of the auditory stimulus is biased toward the location of the visual stimulus, even if the participants are instructed to ignore the visual stimulus. If, however, the reliability of the visual input is reduced by severely blurring the visual stimulus, the effect reverses, and audition captures visual localization (Alais & Burr, 2004), in line with the assumptions of the maximum likelihood estimation model (Sato et al., 2007). On the basis of the available behavioral (Bertelson & Aschersleben, 1998; Bertelson, Vroomen, de Gelder, & Driver, 2000; Vroomen, Bertelson, & de Gelder, 2001), neuropsychological (Bertelson, Pavani, Làdavas, Vroomen, & de Gelder, 2000), and electrophysiological (Bonath et al. …
