Cognitive, Affective, & Behavioral Neuroscience

Contextual Influences of Emotional Speech Prosody on Face Processing: How Much Is Enough?

Article excerpt

The influence of emotional prosody on the evaluation of emotional facial expressions was investigated in an event-related brain potential (ERP) study using a priming paradigm, the facial affective decision task. Emotional prosodic fragments of short (200-msec) and medium (400-msec) duration were presented as primes, followed by an emotionally related or unrelated facial expression (or a facial grimace, which does not resemble an emotion). Participants judged whether or not the facial expression represented an emotion. ERP results revealed an N400-like differentiation for emotionally related prime-target pairs when compared with unrelated prime-target pairs. Faces preceded by prosodic primes of medium length led to a normal priming effect (larger negativity for unrelated than for related prime-target pairs), but the reverse ERP pattern (larger negativity for related than for unrelated prime-target pairs) was observed for faces preceded by short prosodic primes. These results demonstrate that brief exposure to prosodic cues can establish a meaningful emotional context that influences related facial processing; however, this context does not always confer a processing advantage when the prosodic information is very short in duration.

Efforts to understand how the brain processes emotional information have increased rapidly in recent years. New studies are shedding light on the complex and diverse nature of these processes and on the speed with which emotional information is assimilated to ensure effective social behavior. The emotional significance of incoming events must be evaluated within milliseconds, followed by more conceptually based knowledge processing, which often regulates emotionally appropriate behaviors (cf. Phillips, Drevets, Rauch, & Lane, 2003). In everyday situations, emotional stimuli are seldom processed in isolation; rather, we are confronted with a stream of incoming information and events from different sources. To advance knowledge of how the brain processes emotions from different information sources and of how these sources influence one another, we investigated how and when an emotional tone of voice influences the processing of an emotional face, a situation that occurs routinely in face-to-face social interactions.

Effects of Emotional Context

The importance of context in emotion perception has been emphasized by several researchers (e.g., de Gelder et al., 2006; Feldman Barrett, Lindquist, & Gendron, 2007; Russell & Fehr, 1987). Context may be defined as the situation or circumstances that precede and/or accompany certain events, and both verbal contexts (created by language use) and social contexts (created by situations, scenes, etc.) have been shown to influence emotion perception. For example, depending on how the sentence "Your parents are here? I can't wait to see them" is intoned, the speaker may be interpreted either as being pleasantly surprised by the unforeseen event or as feeling the very opposite. Here, emotional prosody (that is, the acoustic variation of perceived pitch, intensity, speech rate, and voice quality) serves as a context for interpreting the verbal message. Emotional prosody can influence not only the interpretation of a verbal message but also the early perception of facial expressions (de Gelder & Vroomen, 2000; Massaro & Egan, 1996; Pourtois, de Gelder, Vroomen, Rossion, & Crommelinck, 2000). The latter reports have led to the proposal that emotional information in the prosody and face channels is automatically evaluated and integrated early during the processing of these events (de Gelder et al., 2006). In fact, recent evidence from event-related brain potentials (ERPs) is consistent with this hypothesis: early ERP components, such as the N100 (and related components), are differentially modulated for affectively congruent and incongruent face-voice pairs (Pourtois et al., 2000; Werheid, Alpay, Jentzsch, & Sommer, 2005). …
