Academic journal article Sign Language Studies

Signs in Which Handshape and Hand Orientation Are Either Not Visible or Are Only Partially Visible: What Is the Consequence for Lexical Recognition?

Article excerpt

RESEARCH ON THE PERCEPTION of sign languages is sparse compared to similar research on spoken languages. Since Stokoe (1960) first argued that sign languages are natural languages, there have been several studies on sign perception. However, many open questions about the processing of signs remain. We know from spoken language research that speech is highly redundant and that this redundancy is necessary because the perceptual system tends to miss much of the presented information. Is this also the case for sign languages? Do signs contain the same high level of redundancy? And if so, are certain elements more important than others?

To answer such questions, we must gain insight into sign perception. A time-honored method of doing so is to select or remove parts of the stimulus material and observe the consequences for recognition. Several studies have applied this method to sign languages. Parish, Sperling, and Landy (1990) removed time fragments from a sign movie, thereby reducing it to a set of keyframes. By choosing salient frames, the number of selected frames could be greatly reduced without affecting the intelligibility of the sign. Ten Holt et al. (2009) selected certain time fragments that correspond to phases within signs and tested recognition based on these fragments alone. They found that various phases give accurate recognition, suggesting the presence of information redundancy in the time domain. Both Grosjean (1981) and Emmorey and Corina (1990) conducted gating experiments to determine how well participants could resolve a sign when they saw increasingly larger parts of the beginning of the sign. Grosjean reports that, on average, participants could identify signs after seeing the first 51 percent. This pertains to the isolation point, the moment when a participant correctly identifies the sign and does not subsequently identify it differently. Emmorey and Corina (1990) report isolation times of 34 percent of the entire sign, but their definition of the start of the sign differs from Grosjean's. Emmorey and Corina (ibid.) also report recognition times of 44 percent of the entire sign. The recognition point is defined as the moment a participant indicates an 80 percent confidence score for identification of the sign (Grosjean does not report on the recognition point). In a recent study Arendsen, van Doorn, and de Ridder (2007) asked participants to watch a movie and respond as quickly as possible when they saw the beginning of a sign language sign. Participants needed to see only about a third of the sign to make this identification.
These studies all suggest the presence of a reasonable amount of redundancy in signs in the time domain.
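The isolation and recognition points used in gating studies have straightforward operational definitions, which can be made concrete with a small sketch. All gate sizes, sign glosses, and data below are invented for illustration; they do not come from the studies cited above.

```python
def isolation_point(responses, target):
    """Return the first gate (fraction of the sign shown) at which the
    participant identifies the target sign and never changes that answer
    at any later gate. `responses` is a list of (fraction_shown, gloss)
    pairs ordered by increasing gate size."""
    for i, (fraction, gloss) in enumerate(responses):
        if gloss == target and all(g == target for _, g in responses[i:]):
            return fraction
    return None  # the sign was never stably identified


def recognition_point(responses, target, threshold=0.8):
    """Return the first gate at which the participant identifies the
    target with a confidence rating at or above `threshold` (80 percent
    in Emmorey and Corina's definition). `responses` is a list of
    (fraction_shown, gloss, confidence) triples."""
    for fraction, gloss, confidence in responses:
        if gloss == target and confidence >= threshold:
            return fraction
    return None


# Invented example: a participant guesses wrongly at an early gate,
# then settles on the correct gloss.
gates = [(0.2, "HOUSE"), (0.4, "SCHOOL"), (0.6, "SCHOOL"), (1.0, "SCHOOL")]
print(isolation_point(gates, "SCHOOL"))  # 0.4

rated = [(0.4, "SCHOOL", 0.3), (0.6, "SCHOOL", 0.6), (0.8, "SCHOOL", 0.9)]
print(recognition_point(rated, "SCHOOL"))  # 0.8
```

Note that by these definitions the recognition point can only occur at or after the isolation point for a given participant, which is consistent with the 34 percent versus 44 percent figures reported by Emmorey and Corina.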

Signs can be subdivided in time, but they can also be altered by removing or obscuring some of their formational parameters (e.g., movement, location, handshape [Stokoe, Casterline, and Croneberg 1965], hand orientation [Battison 1974], and the nonmanual component). Though the organization of the parameters remains a topic of discussion (Emmorey 2002), it is generally agreed that these five variables determine the meaning of a sign. The status of the nonmanual component as a formational parameter remains unclear; see Emmorey (ibid.) for a discussion. In Sign Language of the Netherlands (SLN), however, nonmanual components (more specifically, mouth patterns) are considered a parameter with the same status as the other four (Schermer et al. 1991).1
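The idea that five parameters jointly determine a sign's meaning can be sketched as a simple record type, where two signs that differ in even one parameter are distinct lexical items (a minimal pair). The field names and example values below are illustrative only, not a standard transcription system.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Sign:
    """A sign as a bundle of the five formational parameters
    discussed above; values here are informal labels."""
    handshape: str
    location: str
    movement: str
    orientation: str
    nonmanual: str  # e.g., a mouth pattern, as in SLN


# Two invented signs differing only in location: a minimal pair.
sign_a = Sign("B", "chin", "contact", "palm-in", "neutral")
sign_b = Sign("B", "forehead", "contact", "palm-in", "neutral")
print(sign_a == sign_b)  # False: they differ in one parameter
```

Obscuring a parameter experimentally (as in the studies discussed next) amounts to withholding one of these fields from the perceiver and asking whether the remaining four still suffice for recognition.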

It is possible to study the relative importance of the formational parameters by obscuring one or more of them and observing the effect on perception and recognition of signs. Poizner, Bellugi, and Lutes-Driscoll (1981) removed handshape, hand orientation, and the nonmanual component by recording signs as point-light displays. One of their experiments was a lexical recognition test, which resulted in a recognition accuracy of about 80 percent. However, in this test all of the signs had the same handshape, one known to the participants. …
