Academic journal article: Perception & Psychophysics

Multimodal Access to Verbal Name Codes

Article excerpt

Congruent information conveyed over different sensory modalities often facilitates a variety of cognitive processes, including speech perception (Sumby & Pollack, 1954). Since auditory processing is substantially faster than visual processing, auditory-visual integration can occur over a surprisingly wide temporal window (Stein, 1998). We investigated the processing architecture mediating the integration of acoustic digit names with their corresponding symbolic visual forms. The digits "1" or "2" were presented in auditory, visual, or bimodal format at several stimulus onset asynchronies (SOAs; 0, 75, 150, and 225 msec). Reaction times (RTs) for echoing unimodal auditory stimuli were approximately 100 msec faster than RTs for naming their visual forms. Correspondingly, bimodal facilitation violated race model predictions, but only at SOA values greater than 75 msec. These results indicate that acoustic and visual information is pooled prior to verbal response programming; however, full expression of this bimodal summation depends on the central coincidence of the visual and auditory inputs. These results are considered in the context of studies demonstrating multimodal activation of regions involved in speech production.

Environmental stimuli routinely produce multimodal sensory signals. These multimodal signals are initially encoded in separate sensory pathways that converge in certain subcortical (e.g., Dräger & Hubel, 1976; Jay & Sparks, 1984; Meredith & Stein, 1983) and cortical (e.g., Giard & Peronnet, 1999; Iacoboni, Woods, & Mazziotta, 1998; Kimura & Tamai, 1992; Wallace, Meredith, & Stein, 1992) areas. Behavioral studies have shown that congruent multimodal stimuli generally facilitate sensory processing (Todd, 1912), particularly when one signal is degraded or ambiguous (Bernstein, Chu, & Briggs, 1973; Sumby & Pollack, 1954). The processing advantage conferred by presentation of two stimuli, relative to either stimulus presented alone, is called the redundant signals effect (Diederich, 1995; Diederich & Colonius, 1991; Diederich, Colonius, Bockhorst, & Tabeling, 2003; J. Miller, 1982, 1986). A robust redundant signals effect is often observed in studies of multimodal processing. In these experiments, observers are presented with multimodal (auditory and visual [A + V]) or unimodal (auditory [A] or visual [V]) stimuli and are required to respond identically to all stimuli (e.g., manual simple reaction time [RT], manual choice RT, or saccadic eye movement). The dependent variable in these behavioral paradigms can be either response accuracy (Ashby & Townsend, 1986; D. M. Green & Swets, 1966) or RT (Diederich & Colonius, 2004; Hughes, Nelson, & Aronchick, 1998; Hughes, Reuter-Lorenz, Nozawa, & Fendrich, 1994; J. Miller, 1982; Mordkoff & Yantis, 1991; Nickerson, 1973; Nozawa, Reuter-Lorenz, & Hughes, 1994; Raab, 1962; Townsend & Nozawa, 1995). The typical finding is that responses to multimodal signals are faster and more accurate than those to the unimodal signals, an effect that is also called bimodal summation.
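For reference, the race model bound against which such bimodal facilitation is tested (J. Miller, 1982) can be stated explicitly; this display is supplied here for clarity and is not reproduced from the excerpt:

\[
P(\mathrm{RT} \le t \mid A{+}V) \;\le\; P(\mathrm{RT} \le t \mid A) + P(\mathrm{RT} \le t \mid V) \quad \text{for all } t,
\]

with the lagging channel's distribution shifted by the SOA when the onsets are asynchronous (J. Miller, 1986). Any time t at which the bimodal RT distribution exceeds this bound is inconsistent with a race between separate unimodal activations and points instead to coactivation.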

There are several mechanisms that could produce the redundant signals effect in general, and bimodal summation in particular. Each of them represents a variant of parallel processing. Suggesting that auditory and visual inputs are processed in parallel seems quite natural, since each modality is initially processed in its own modality-specific pathway. Parallel processing simply means that information processing within these modality-specific pathways occurs concurrently: processing may proceed in both the auditory and visual channels simultaneously, but need not end at the same time. Theoretical work has identified several important concepts related to parallel processing and has illustrated how they affect predicted levels of performance in a parallel processing architecture (e.g., …
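Since the excerpt breaks off here, the following sketch may help make the race/coactivation contrast concrete. It simulates a pure race between independent auditory and visual channels and checks the result against the race model bound. None of this code comes from the article: the Gaussian form of the distributions, every parameter value, and the SOA convention are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical unimodal naming-time distributions (msec). The ~100-msec
    # auditory advantage mirrors the article; the Gaussian form and all
    # parameter values are assumptions for demonstration only.
    rt_a = rng.normal(loc=350, scale=40, size=n)   # echoing the spoken digit
    rt_v = rng.normal(loc=450, scale=50, size=n)   # naming the visual digit

    soa = 150  # assumed convention: visual onset leads auditory onset by 150 msec

    # Pure race (separate-activations) model: the response is triggered by
    # whichever channel finishes first, with the auditory input delayed by the SOA.
    rt_bimodal_race = np.minimum(rt_v, rt_a + soa)

    # Empirical CDFs evaluated on a common time grid.
    t = np.linspace(200, 800, 601)
    def ecdf(x):
        return np.searchsorted(np.sort(x), t, side="right") / x.size

    # SOA-adjusted race model inequality (J. Miller, 1982, 1986):
    # P(RT <= t | A+V) <= P(RT <= t - SOA | A) + P(RT <= t | V)
    bound = np.clip(ecdf(rt_a + soa) + ecdf(rt_v), 0.0, 1.0)
    violation = (ecdf(rt_bimodal_race) - bound).max()

    # A pure race can match but never exceed the bound, so the maximum
    # "violation" here is non-positive.
    print(f"maximum violation under a pure race: {violation:.4f}")

Because a race response is simply the faster of the two channel finishing times, its CDF can never exceed the summed unimodal CDFs; empirical bimodal RTs that do exceed the bound, as the abstract reports at SOAs beyond 75 msec, therefore implicate pooled (coactivated) processing prior to response programming.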
