A CONFERENCE ORGANIZING COMMITTEE, in their invitation to me, asked for my "observations concerning the significance of sign language research on linguistics: how it has changed the very definitions of language and how it has allowed us to see and understand language in new and important ways." I am a psycholinguist who studies child language development crosslinguistically. Ever since my early days at Berkeley, when the California School for the Deaf was still next door to us, it has been obvious to me that a full understanding of our capacity for language has to be based on both spoken and signed languages. Moreover, it has always been clear to me that we cannot understand the structures and functions of human languages without careful examination of the many types of languages that are still in use on our planet.
Research on child language in various countries, as well as collaborations with linguists and anthropologists who study various unwritten languages, has shown me that language is embedded in use: in contexts of communicating, planning, thinking, and remembering. Our standard linguistic theories grew out of the analysis of written texts and isolated sentences. Therefore, the theories are limited to a narrow range of language use, and they leave out all of the dimensions of face-to-face interaction that are central to any study of signed languages: eye contact, facial expressions, body posture and movement, and gesture. Add to this all of the gradient phenomena that are available to signers: rate, intensity, and expansiveness of movement. These dimensions are equally available to users of spoken languages in the form of intonation patterns, along with rate and intensity of vocal production. Because almost all of these prosodic devices are missing from our writing systems, they have been excluded from most linguistic descriptions of languages. At best, they are allocated to secondary categories with labels such as "extralinguistic," "paralinguistic," or "nonlinguistic."
Beyond these factors, it is evident from serious study of sign language discourse that a full description and analysis require attention to a range of pragmatic factors: eye gaze, facial expression, role shift, and more. Whenever a spoken language uses an actual phonological vocal production, such as a syllable, to express one of these pragmatic notions, it is included in the so-called linguistic description. For example, Japanese and Korean have sentence-final particles that communicate things like "this is something that will surprise you" or "this is something you and I can take for granted." These syllables are part of the linguistic description of those languages. However, when the same thing is done by an intonation pattern in English, it is classified as extralinguistic or paralinguistic and is not part of the grammar. In addition, the facial expressions that communicate these kinds of information aren't even considered in studies of paralinguistic phenomena. In approaches to spoken languages, co-speech gestures are attracting more attention, and their role in speech production is being debated in psycholinguistics. Slowly, investigators of language are moving away from our ancient written language bias.
Sign languages are obviously relevant to all of these issues. In a number of countries, research on the linguistics of sign language has tried to draw a line between what is "really linguistic" and the rest of the expressive devices that are available to users of a visual/manual language. Many of these devices are labeled as "nonmanual," but that very term suggests an unexamined presupposition that the manual component is the place where essentially "linguistic" phenomena are located. I suggest that we don't need to worry about what we categorize as "linguistic" until we have a better understanding of the range of gradient and body and facial components in both signed and spoken languages.
I am not a signer, though I speak a number of spoken languages. …