Examinations of Identity Invariance in Facial Expression Adaptation

Faces provide a wealth of information essential to social interaction, including both static features, such as identity, and dynamic features, such as emotional state. Classic models of face perception propose separate neural-processing routes for identity and facial expression (Bruce & Young, 1986), but more recent models suggest that these routes are not independent of each other (Calder & Young, 2005). In the present study, we used a perceptual adaptation paradigm to further examine the nature of the relation between the neural representations of identity and emotional expression. In Experiment 1, adaptation to the basic emotions of anger, surprise, disgust, and fear biased perception significantly away from the adapting expression. This aftereffect was significantly reduced when the adapting and test faces differed in identity. Using a statistical model that separated surface texture and reflectance from underlying expression geometry, Experiment 2 showed a similar reduction in adaptation when the face stimuli shared identical underlying prototypical geometry but differed in the static surface features that support identity. These results provide evidence that expression adaptation depends on perceptual features important for identity processing and thus suggest at least partly overlapping neural processing of identity and facial expression.

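For readers less familiar with adaptation aftereffects, the sketch below illustrates one common way such an effect can be quantified: as a shift in the point of subjective equality (PSE) along an expression morph continuum after adapting to one expression. It is not taken from the study; the fear-to-anger continuum, the logistic psychometric fit, and all data values are illustrative assumptions.

    # Hypothetical sketch (not the authors' analysis code): quantify an expression
    # aftereffect as the shift in the point of subjective equality (PSE) along a
    # fear-to-anger morph continuum, before vs. after adapting to an angry face.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, pse, slope):
        """Psychometric function: P(respond 'anger') at morph level x (0 = fear, 1 = anger)."""
        return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

    morph_levels = np.linspace(0.0, 1.0, 7)  # seven morph steps from fear to anger

    # Made-up response proportions (fraction of 'anger' responses at each morph level)
    baseline   = np.array([0.02, 0.08, 0.25, 0.50, 0.75, 0.92, 0.98])
    post_adapt = np.array([0.01, 0.03, 0.10, 0.30, 0.60, 0.85, 0.95])  # after adapting to anger

    # Fit the psychometric function in each condition and compare the PSEs
    popt_base, _ = curve_fit(logistic, morph_levels, baseline,   p0=[0.5, 0.1])
    popt_post, _ = curve_fit(logistic, morph_levels, post_adapt, p0=[0.5, 0.1])

    aftereffect = popt_post[0] - popt_base[0]  # positive = PSE moved toward the adapted expression
    print(f"PSE shift after adapting to anger: {aftereffect:.3f}")

In this kind of analysis, a positive PSE shift means that more of the adapted expression is needed before a test face is judged to show it; that is, perception is biased away from the adapting expression. The study's key finding is that this shift is reduced when the adapting and test faces differ in identity.
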
Humans are social beings, and face perception is an essential aspect of social cognition. Faces provide a great amount of information critical to social interaction, including both static features, such as information about a person's age, gender, race, and identity, and dynamic or changeable features, such as those that communicate one's internal emotional state (Calder & Young, 2005; Haxby, Hoffman, & Gobbini, 2000). Classic models of face perception (Bruce & Young, 1986) propose separate, specialized, and parallel processing routes for these distinct feature sets, in particular for the recognition of identity and of facial expression. A functional analysis suggests that it is advantageous to be able to identify a person regardless of their facial expression, with an abstract, invariant representation of an identity transferring across different facial expressions much as it transfers across different viewpoints. Conversely, because it is also useful to infer the same emotional states from similar expressions on different faces, abstract, invariant representations of facial expressions need to transfer across different individuals.

Supporting this functional segregation, research has shown that distinct neural processes may support identity and facial expression analyses. A double dissociation between the ability to identify faces and the ability to interpret expressions has been observed in both brain-damaged patients (e.g., Bruyer et al., 1983; Kurucz & Feldmar, 1979; Kurucz, Feldmar, & Werner, 1979) and healthy subjects (Bruce, 1986). The separation between the processing of static, invariant (e.g., identity) and dynamic, variable (e.g., expression) aspects of faces has also been observed in neurophysiological studies, with preferential processing of the former by the inferior temporal gyrus (the middle fusiform gyrus in humans; Grill-Spector, Knouf, & Kanwisher, 2004; Kanwisher, McDermott, & Chun, 1997) and preferential processing of the latter by the superior temporal sulcus (STS; Haxby et al., 2000; Narumoto, Okada, Sadato, Fukui, & Yonekura, 2001; Winston, Henson, Fine-Goulden, & Dolan, 2004). In addition, neuropsychological and neuroimaging evidence suggests that the recognition of categorically distinct prototypical emotions depends on further differentiated neural representations, such as the amygdala for fear (e.g., Adolphs et al., 1999; Anderson, Christoff, Panitz, De Rosa, & Gabrieli, 2003; Calder, Lawrence, & Young, 2001) and the right anterior insula for disgust (e.g., …
