Academic journal article Attention, Perception and Psychophysics

Interdependent Processing and Encoding of Speech and Concurrent Background Noise

Published online: 14 March 2015

© The Psychonomic Society, Inc. 2015

Abstract Speech processing can often take place in adverse listening conditions that involve the mixing of speech and background noise. In this study, we investigated processing dependencies between background noise and indexical speech features, using a speeded classification paradigm (Garner, 1974; Exp. 1), and whether background noise is encoded and represented in memory for spoken words in a continuous recognition memory paradigm (Exp. 2). Whether or not the noise spectrally overlapped with the speech signal was also manipulated. The results of Experiment 1 indicated that background noise and indexical features of speech (gender, talker identity) cannot be completely segregated during processing, even when the two auditory streams are spectrally nonoverlapping. Perceptual interference was asymmetric, whereby irrelevant indexical feature variation in the speech signal slowed noise classification to a greater extent than irrelevant noise variation slowed speech classification. This asymmetry may stem from the fact that speech features have greater functional relevance to listeners, and are thus more difficult to selectively ignore than background noise. Experiment 2 revealed that a recognition cost for words embedded in different types of background noise on the first and second occurrences emerged only when the noise and the speech signal were spectrally overlapping. Together, these data suggest integral processing of speech and background noise, modulated by the level of processing and the spectral separation of the speech and noise.

Keywords Selective attention · Speech perception · Implicit/explicit memory

In everyday conversations, listeners must sift through multiple dimensions of the incoming auditory input in order to extract the relevant linguistic content. The speech signal contains not only linguistic material but also indexical information, which includes the particular voice and articulatory characteristics of the speaker that would enable a listener to identify the speaker's gender or individual identity. Moreover, listeners must also contend with the fact that, in many situations, environmental noise will co-occur with the speech signal. Although robust evidence suggests that the linguistic and indexical dimensions of speech are integrally processed during speech perception (e.g., Mullennix & Pisoni, 1990), relatively little research has been conducted on whether or not linguistically irrelevant environmental noise¹ is also processed integrally and/or encoded in memory with the linguistic and indexical attributes of a speech event during speech processing. Thus, in the present study we investigated (a) the extent to which indexical speech features and background noise are processed interdependently at a relatively early stage of processing, using the Garner speeded classification paradigm (following Garner, 1974; Exp. 1), and (b) whether the consistency of concurrently presented background noise from a first to a second occurrence can serve as a facilitatory cue for recognition of the word as having occurred earlier in a list of spoken words (following Palmeri, Goldinger, & Pisoni, 1993, and Bradlow, Nygaard, & Pisoni, 1999; Exp. 2).

Integration of indexical and linguistic information

Traditional models of spoken word recognition have assumed that linguistic processing operates over abstract symbolic representations, and that nonlinguistic features of the speech signal, such as indexical information, are stripped away from the linguistic content during speech processing and encoding (see Pisoni, 1997, for a review). However, a growing body of literature has demonstrated that linguistic and indexical information are perceptually integrated and encoded during speech processing (e.g., Bradlow et al., 1999; Church & Schacter, 1994; Cutler, Andics, & Fang, 2011; Goldinger, 1996; Kaganovich, Francis, & Melara, 2006; Mullennix & Pisoni, 1990; Nygaard, Sommers, & Pisoni, 1994; Palmeri et al. …
