Academic journal article, Perception & Psychophysics

Multisensory Processing in the Redundant-Target Effect: A Behavioral and Event-Related Potential Study

Article excerpt

Participants respond more quickly to two simultaneously presented target stimuli of two different modalities (redundant targets) than would be predicted from their reaction times to the unimodal targets. To examine the neural correlates of this redundant-target effect, event-related potentials (ERPs) were recorded to auditory, visual, and bimodal standard and target stimuli presented at two locations (left and right of central fixation). Bimodal stimuli were combinations of two standards, two targets, or a standard and a target, presented either from the same or from different locations. Responses generally were faster for bimodal stimuli than for unimodal stimuli and were faster for spatially congruent than for spatially incongruent bimodal events. ERPs to spatially congruent and spatially incongruent bimodal stimuli started to differ over the parietal cortex as early as 160 msec after stimulus onset. The present study suggests that hearing and seeing interact at sensory-processing stages by matching spatial information across modalities.

A race model requires that this inequality should hold for the cumulative RT distributions for both unimodal and bimodal stimuli. If the redundancy gain surpasses that predicted by the race model, rejecting the race model in favor of a coactivation model is justified. Proponents of the latter model disagree with the separate processing view and suggest that information from the two modality channels is integrated at a particular processing level and subsequently processed as a combined entity. This processing stage gains from redundant information, resulting in faster responses to redundant stimuli.
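The inequality referred to here is presumably Miller's (1982) race model inequality, under which the cumulative RT distribution for bimodal stimuli may not exceed the sum of the unimodal cumulative distributions at any time t. The sketch below, a hypothetical illustration (the function name and quantile grid are not from the article), probes that bound at the empirical quantiles of the bimodal RTs:

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av,
                         quantiles=np.linspace(0.05, 0.95, 10)):
    """Probe Miller's (1982) race model inequality:
        P(RT <= t | AV)  <=  P(RT <= t | A) + P(RT <= t | V).
    rt_a, rt_v, rt_av are 1-D arrays of single-trial reaction times.
    Returns, per probed time point, the amount by which the bimodal
    CDF exceeds the race-model bound (positive values = violation,
    i.e. evidence against the race model and for coactivation)."""
    # Evaluate the inequality at the bimodal RT quantiles.
    ts = np.quantile(rt_av, quantiles)

    def ecdf(sample, t):
        # Empirical CDF of `sample` evaluated at each time in `t`.
        return np.mean(sample[:, None] <= t[None, :], axis=0)

    # The race-model bound is a probability sum, capped at 1.
    bound = np.minimum(ecdf(rt_a, ts) + ecdf(rt_v, ts), 1.0)
    return ecdf(rt_av, ts) - bound
```

A positive difference at early quantiles is the pattern typically taken to reject the race model in favor of coactivation; in practice the violation is assessed statistically across participants rather than read off a single sample.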

The race model is typically sufficient to explain the redundancy gain of healthy participants in simple unimodal detection tasks with two classes of visual stimuli (see, e.g., Corballis, 1998, 2002; a weak violation was found by Miniussi, Girelli, & Marzi, 1998, which was, however, not statistically tested). Surprisingly, split-brain patients have displayed redundancy gains larger than those predicted by the race model when bilateral stimuli have been used (Corballis, 1998, 2002; Reuter-Lorenz, Nozawa, Gazzaniga, & Hughes, 1995; Roser & Corballis, 2002). Therefore, Roser and Corballis concluded that coactivation occurs at the subcortical, rather than at the cortical, level. In healthy individuals, by contrast, fast interhemispheric transfer might mask subcortical coactivation.

For bimodal divided attention tasks, the RT gain observed with bimodal stimuli is usually larger than that predicted by separate activation models, so that coactivation models have been adopted (Giray & Ulrich, 1993; Gondan, Lange, Rösler, & Röder, 2004; Hughes, Reuter-Lorenz, Nozawa, & Fendrich, 1994; Miller, 1982, 1986, 1991; Molholm et al., 2002; Plat, Praamstra, & Horstink, 2000; Schröger & Widmann, 1998). Different loci at which the coactivation may take place have been suggested: Multisensory interactions may take place (1) at perceptual stages (Hershenson, 1962; Hughes et al., 1994; Molholm et al., 2002), (2) at higher cognitive stages (e.g., decision or memory; Miller, 1982; Mordkoff & Yantis, 1991; Schröger & Widmann, 1998), and/or (3) during motor preparation and execution (Diederich & Colonius, 1987; Giray & Ulrich, 1993; but see Miller, Ulrich, & Lamarre, 2001; Mordkoff, Miller, & Roch, 1996).

With event-related potentials (ERPs), multisensory interactions have typically been investigated using unimodal stimuli of two modalities and their bimodal combination (e.g., auditory/A, visual/V, and bimodal/AV). Unimodal stimuli evoke typical sensory-specific potentials. The two unimodal ERPs are summed and subtracted from the ERPs to bimodal stimuli: AV - (A + V). A nonzero result is interpreted as an interaction of the two modalities. Using this formula, Foxe et al. (2000) demonstrated an interaction of the auditory and the somatosensory system at right-central recording sites, for stimuli presented from the left side, as early as 60 msec after stimulus onset. …
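The additive-model comparison AV − (A + V) can be computed per electrode and time point on the averaged waveforms. A minimal sketch, assuming averaged ERPs stored as (electrodes × timepoints) arrays (the function name and array layout are illustrative assumptions, not from the article):

```python
import numpy as np

def av_interaction(erp_a, erp_v, erp_av):
    """Additive-model test for audiovisual interaction:
        interaction(t) = ERP_AV(t) - (ERP_A(t) + ERP_V(t)).
    Each argument is an averaged ERP of shape (n_electrodes,
    n_timepoints). Under strictly independent unimodal processing
    the result is zero; a reliably nonzero deflection at time t
    is interpreted as a multisensory interaction by that latency."""
    return erp_av - (erp_a + erp_v)
```

One known caveat with this subtraction is that any activity common to all conditions (e.g., anticipatory slow potentials) enters A + V twice but AV only once, so nonzero results need careful baseline and design controls before being attributed to genuine multisensory interaction.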
