Attention, Perception, & Psychophysics

The Effects of Association Strength and Cross-Modal Correspondence on the Development of Multimodal Stimuli

Article excerpt

Published online: 13 November 2014

© The Psychonomic Society, Inc. 2014

Abstract: In addition to temporal and spatial contributions, multimodal binding is also influenced by association strength and the congruency between stimulus elements. A paradigm was established in which an audio-visual stimulus consisting of four attributes (two visual, two auditory) was presented, followed by questions regarding the specific nature of two of those attributes. We wanted to know how association strength and congruency would modulate the basic effect whereby retrieving same-modality information (two visual or two auditory attributes) is easier than retrieving different-modality information (one visual and one auditory attribute). In Experiment 1, association strengths were compared across three conditions: baseline, intramodal (100% association within modalities, thereby benefiting same-modality retrieval), and intermodal (100% association between modalities, thereby benefiting different-modality retrieval). Association strength was shown to impair responses to same-modality information during intermodal conditions. In Experiment 2, association strength was manipulated identically but was combined with cross-modally corresponding stimuli (further benefiting different-modality retrieval). The locus of the effect again fell on responses to same-modality information, impairing responding during intermodal conditions but facilitating it during intramodal conditions. The potential contributions of association strength and cross-modal congruency in promoting learning between vision and audition are discussed in relation to a possible default within-modality binding mechanism.

Keywords: Multisensory processing · Statistical inference
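To make the association-strength manipulations described in the abstract concrete, the following is a minimal sketch of how trials in such a paradigm could be generated. The specific attribute dimensions (shape, colour, pitch, timbre), their values, and the probe procedure are illustrative assumptions; the article does not specify the actual stimuli.

```python
import random

# Hypothetical attribute sets; shape/colour and pitch/timbre are assumptions,
# not the dimensions used in the reported experiments.
VISUAL_1 = ["circle", "square"]      # first visual attribute (assumed: shape)
VISUAL_2 = ["red", "blue"]           # second visual attribute (assumed: colour)
AUDITORY_1 = ["high", "low"]         # first auditory attribute (assumed: pitch)
AUDITORY_2 = ["pure", "complex"]     # second auditory attribute (assumed: timbre)


def make_trial(condition):
    """Generate one four-attribute audio-visual stimulus plus a two-attribute probe."""
    i, j = random.randrange(2), random.randrange(2)
    if condition == "baseline":
        # No predictive structure: each attribute is drawn independently.
        stim = {"v1": random.choice(VISUAL_1), "v2": random.choice(VISUAL_2),
                "a1": random.choice(AUDITORY_1), "a2": random.choice(AUDITORY_2)}
    elif condition == "intramodal":
        # 100% association within modalities: v1 predicts v2, and a1 predicts a2.
        stim = {"v1": VISUAL_1[i], "v2": VISUAL_2[i],
                "a1": AUDITORY_1[j], "a2": AUDITORY_2[j]}
    elif condition == "intermodal":
        # 100% association between modalities: v1 predicts a1, and v2 predicts a2.
        stim = {"v1": VISUAL_1[i], "a1": AUDITORY_1[i],
                "v2": VISUAL_2[j], "a2": AUDITORY_2[j]}
    else:
        raise ValueError(f"unknown condition: {condition}")
    # Probe either two same-modality attributes or one attribute from each modality.
    probe = random.choice([("v1", "v2"), ("a1", "a2"), ("v1", "a1"), ("v2", "a2")])
    return stim, probe


print(make_trial("intermodal"))
```

Under this sketch, the intramodal condition rewards retrieval of two attributes from the same modality, while the intermodal condition rewards retrieval of one attribute from each modality, mirroring the predicted benefits described in the abstract.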

Statistical learning refers to the successful registration of associative probability in the environment. It is a process that often occurs without awareness (Kim, Seitz, Feenstra, & Shams, 2009) and plays a central role in language acquisition, arguably one of the most important instances of human learning. As such, statistical learning has been extensively studied in infants and adults, in both the visual and auditory domains (e.g., Fiser & Aslin, 2002; Saffran, Aslin, & Newport, 1996).

Most recently, statistical learning has been extended to audio-visual cases (e.g., Conway & Christiansen, 2006; Mitchel & Weiss, 2011; Robinson & Sloutsky, 2007; van den Bos, Christiansen, & Misyak, 2012; Walk & Conway, 2011), with particular emphasis on the differential constraints associated with learning associations within and between modalities. For example, Conway and Christiansen (2006) pointed out that the learning of multiple grammars can benefit from distinct stimulus delivery: learning two grammars within the same dimension (e.g., visual shape) is considerably worse than learning two grammars separated across two different attributes within the same modality (e.g., shape and color in the visual domain) or across two different attributes of different modalities (e.g., shape in the visual domain and pitch in the auditory domain). Furthermore, the provision of correlated information in a secondary modality might help associative learning in a primary modality (Kawahara, 2007; Mitchel & Weiss, 2011; Robinson & Sloutsky, 2007; van den Bos et al., 2012), even when these correlations are arbitrary, such as the relationship between an auditory syllable and a visual color (Glicksohn & Cohen, 2013). In these instances, the use of multiple modalities may serve to make individual grammars more distinctive. Yet, despite these potential benefits, other researchers have observed disadvantages associated with multimodal statistical learning, such as the failure to detect grammar violations in triplets presented sequentially across multiple modalities (e.g., visual-auditory-visual), relative to triplets defined by a single modality (e.g., visual-visual-visual). …
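The associative probabilities referred to above are typically operationalized as transitional probabilities between adjacent elements of a stimulus stream. The sketch below illustrates this with an invented syllable stream in the spirit of Saffran et al. (1996); the "words" and syllables are made up for illustration and are not taken from any of the cited studies.

```python
import random
from collections import defaultdict


def transitional_probabilities(stream):
    """Estimate P(next | current) for adjacent elements of a sequence:
    the kind of associative probability that statistical-learning studies
    show observers register, often without awareness."""
    pair_counts, first_counts = defaultdict(int), defaultdict(int)
    for a, b in zip(stream, stream[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}


# Two invented three-syllable "words"; concatenating them in random order yields
# high transitional probabilities within words and lower ones across boundaries,
# which is what allows a learner to segment the continuous stream.
words = [("bi", "da", "ku"), ("pa", "do", "ti")]
stream = [syllable for _ in range(500) for syllable in random.choice(words)]
tps = transitional_probabilities(stream)
print(tps[("bi", "da")])  # within-word transition: 1.0
print(tps[("ku", "pa")])  # across a word boundary: roughly 0.5
```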

