Academic journal article Cognitive, Affective and Behavioral Neuroscience

Vocal Emotions Influence Verbal Memory: Neural Correlates and Interindividual Differences


Published online: 8 December 2012

© Psychonomic Society, Inc. 2012

Abstract Past research has identified an event-related potential (ERP) marker for vocal emotional encoding and has highlighted vocal-processing differences between male and female listeners. We further investigated this ERP vocal-encoding effect in order to determine whether it predicts voice-related changes in listeners' memory for verbal interaction content. Additionally, we explored whether sex differences in vocal processing would affect such changes. To these ends, we presented participants with a series of neutral words spoken with a neutral or a sad voice. The participants subsequently encountered these words, together with new words, in a visual word recognition test. In addition to making old/new decisions, the participants rated the emotional valence of each test word. During the encoding of spoken words, sad voices elicited a greater P200 in the ERP than did neutral voices. While the P200 effect was unrelated to a subsequent recognition advantage for test words previously heard with a neutral as compared to a sad voice, the P200 did significantly predict differences between these words in a concurrent late positive ERP component. Additionally, the P200 effect predicted voice-related changes in word valence. As compared to words studied with a neutral voice, words studied with a sad voice were rated more negatively, and this rating difference was larger, the larger the P200 encoding effect was. While some of these results were comparable in male and female participants, the latter group showed a stronger P200 encoding effect and qualitatively different ERP responses during word retrieval. Estrogen measurements suggested the possibility that these sex differences have a hormonal basis.

Keywords Sex differences . Gender differences . Prosody . Affective . Neuroimaging . Hormone . Oxytocin . Brain


In anger, sadness, exhilaration, or fear, speech takes on an urgency that is lacking from its normal, even-tempered form. It becomes louder or softer, more hurried or delayed, more melodic, erratic, or monotonous. But, irrespective of the type of change, the emotions that break forth acoustically make speech highly salient. Thus, a manager who scolds her personnel, a child who cries for his mother, or an orator pleading with a shaking voice readily capture a listener's attention. How do emotional voices achieve such influence, and is this influence sustained beyond the immediate present?

The first part of this question has been addressed extensively. Using online measures such as scalp-recorded event-related potentials (ERPs) or functional magnetic resonance imaging (fMRI), researchers have identified auditory processing mechanisms that derive emotional meaning from voices quickly and seemingly automatically. Evidence for fast vocal emotional processing has come from Paulmann and others (Paulmann & Kotz, 2008; Paulmann, Seifert, & Kotz, 2010; Sauter & Eimer, 2010), who compared the processing of emotionally and neutrally spoken sentences that were attended and task-relevant. The researchers found that, relative to neutral speech, emotional speech increased the P200, a centrally distributed positive component that reaches its maximum around 200 ms following stimulus onset. They showed this effect for a wide range of vocal emotions, including anger, fear, disgust, happiness, pleasant surprise, and sadness, and they suggested that it reflects early detection of emotional salience (Paulmann & Kotz, 2008; Paulmann et al., 2010).

Other researchers have explored the automaticity of processing emotional voices. For example, Grandjean et al. (2005) conducted an fMRI study that used a dichotic listening task during which participants heard neutral and angry pseudospeech. The participants were instructed to discriminate male from female voices in one ear and to ignore the voices presented to the other ear. …
