Academic journal article Sign Language Studies

To Capture a Face: A Novel Technique for the Analysis and Quantification of Facial Expressions in American Sign Language

Article excerpt

OVER THE PAST TWO DECADES research on American Sign Language (ASL) has shown that, although the hands and arms articulate most of the content words (nouns, verbs, and adjectives), a large part of the grammar of ASL is expressed nonmanually. The hands and arms do play central grammatical roles, but, in addition, movements of the head, torso, and face are used to express certain aspects of ASL syntax such as functional categories, syntactic agreement, syntactic features, complementizers, and discourse markers. Since the pioneering work on nonmanuals (i.e., parts of ASL not expressed through the arms and hands) by Liddell (1986), Baker-Shenk (1983, 1986), and Baker-Shenk and Padden (1978), research has increasingly focused on facial expressions in ASL and their syntactic significance (Neidle et al. 2000; Aarons 1994; Aarons et al. 1992; Baker-Shenk 1985).

It is now well established that ASL requires the use of the face not only to express emotions but also to mark several different kinds of questions: wh-questions (questions using who, what, where, when, or why), yes/no (y/n) questions (Neidle et al. 1997; Petronio and Lillo-Martin 1997; Baker-Shenk 1983, 1986), and rhetorical questions (Hoza et al. 1997), as well as many other syntactic and adverbial constructs (Anderson and Reilly 1998; Shepard-Kegl, Neidle, and Kegl 1995; Reilly, McIntire, and Bellugi 1990; Wilbur and Schick 1987; Coulter 1978, 1983; Liddell 1978; Baker-Shenk and Padden 1978; Friedman 1974).

In addition to these grammatical facial expressions and the full range of emotional facial expressions, which Ekman and Friesen (1975, 1978) contend are universal, both spoken and signed languages use facial expressions such as quizzical, doubtful, and scornful, which can be categorized as nonemotional and nongrammatical (NENG). These NENG facial expressions are commonly used during social interaction, without carrying emotional or grammatical meaning. We include them here in order to study a class of facial expressions that exhibits neither the automatic qualities of emotional expressions (Whalen et al. 1998) nor the structured and grammar-specific characteristics of ASL syntax described earlier.

ASL is a language of dynamic visuo-spatial changes that are often difficult to describe but nonetheless essential for our understanding of the language (Emmorey 1995). Grossman (2001) and Grossman and Kegl (submitted) emphasize the need to use dynamic facial expressions (video clips), as opposed to the commonly used static images (photographs), in order to obtain a more realistic assessment of the way in which hearing and deaf people recognize and categorize facial expressions. However, only a few detailed analyses of the production of dynamic emotional and grammatical facial expressions are available in ASL.

Baker-Shenk (1983) and Bahan (1996) have dealt extensively with the development of dynamic ASL facial expressions and their link to the manual components of ASL sentences, detailing each expression's onset, apex (maximal expression), apex duration, and offset. Baker-Shenk and Bahan observed these dynamic changes in numerous ASL sentences and looked for common denominators among samples of the same type of expression (e.g., wh-questions, y/n questions) in order to determine how specific expression types differ from one another. Baker-Shenk used Ekman and Friesen's Facial Action Coding System (FACS; Ekman and Friesen 1975, 1978) to analyze ASL question faces. In this system, each muscle group of the face is assigned an action unit (AU) number, and a specific combination of AUs defines a given facial expression. Using this technique, Baker-Shenk produced detailed descriptions of several different types of ASL question faces.

This approach, however, encounters some difficulties in describing dynamic features or gestures such as head tilts. For example, when looking at y/n questions, Baker-Shenk found that six samples out of sixteen had a downward head tilt, nine a forward tilt, and three had both. …
