Academic journal article Canadian Journal of Experimental Psychology

Memory for Pictures and Sounds: Independence of Auditory and Visual Codes

Article excerpt

Abstract: Three experiments examined the mnemonic independence of auditory and visual nonverbal stimuli in free recall. Stimulus lists consisted of (1) pictures, (2) the corresponding environmental sounds, or (3) picture-sound pairs. In Experiment 1, free recall was tested under three learning conditions: standard intentional, intentional with a rehearsal-inhibiting distracter task, or incidental with the distracter task. In all three groups, recall was best for the picture-sound items. In addition, recall for the picture-sound stimuli appeared to be additive relative to pictures or sounds alone when the distracter task was used. Experiment 2 included two additional groups: In one, two copies of the same picture were shown simultaneously; in the other, two different pictures of the same concept were shown. There was no difference in recall among any of the picture groups; in contrast, recall in the picture-sound condition was greater than recall in either single-modality condition. However, doubling the exposure time in a third experiment resulted in additively higher recall for repeated pictures with different exemplars than ones with identical exemplars. The results are discussed in terms of dual coding theory and alternative conceptions of the memory trace.

To date, most of the research dealing with the representation of non-verbal information has focussed on either visual imagery or picture memory. Less attention has been given to other sensory components of a memory trace and how these might contribute to memory performance. The experiments reported here examined the mnemonic independence of auditory and visual information using free recall of pictures, sounds, and picture-sound pairs. In particular, we tested the hypothesis that the visual and auditory representations of such objects as telephones, bells, and whistles are functionally independent in memory.

Functional independence of memory codes implies that different encodings of the same item are represented in such a manner as to permit independent access to and retrieval of the component traces. In particular, this position assumes that the components of multimodal objects retain their modality-specific individuality in memory rather than being fused into a homogeneous or amodal entity, as implied by single-code theories of cognitive representations (see Anderson, 1978; Kieras, 1978; Snodgrass, 1984, for discussions of this viewpoint). Independent retrieval of memory components could arise from several types of representation. One possibility is that a single representation of the stimulus is created, but that the various sensory components of this trace can be accessed separately. Alternatively, the various components may not be integrated in any way, but may instead function as unique encodings of the same item. However, regardless of the representational assumptions made, the independence hypothesis implies that the encoding or retrieval of trace components should be subject to selective interference and to selective forgetting of one component or another, and that additive effects of multiple components on memory performance should be observed under appropriate conditions. To date, the empirical findings have offered inconsistent support for the independence hypothesis.

For example, results obtained using the selective interference paradigm have generally supported the independence hypothesis. Several investigators have reported that a visual signal is more difficult to detect when subjects are performing a visual imagery task compared to an auditory one and, conversely, auditory signal detection is hampered more by an auditory than by a visual imagery task (DiVesta & Bartoli, 1982; Segal & Fusella, 1970). Glass, Millen, Beck, and Eddy (1985) found that sentences high in visual imagery took longer to verify when presented visually than auditorily. However, sentences high in auditory imagery did not show similar modality-interference effects. …
