Understanding Japanese Sign Language
Sumihiro KAWANO and Takao KUROKAWA
Graduate School of Engineering and Science, Kyoto Institute of Technology,
Matsugasaki, Sakyo-ku, Kyoto 606-8585, JAPAN
Japanese Sign Language (JSL) is the native language of hearing-impaired people in Japan. It is multi-modal: it is signaled by many kinds of body actions, including head movement, eye movement, posture and facial expression as well as manual gestures. It is difficult for hearing people to acquire JSL and to communicate with the hearing impaired in it. To solve this problem and improve the present communication environment for the hearing impaired, the authors are developing a system that translates Japanese into JSL and vice versa [1]. In this system, the results of translation from Japanese sentences, i.e. JSL sentences, are shown to non-hearing people by means of sign animation featuring a human model [2]. The animation is synthesized by a rule-based method. In previous studies [3][4], we analyzed movements such as nodding, blinking and gaze behavior appearing on a sign interpreter's face while talking in JSL, and an experiment using the animation showed their positive effects on the understanding of JSL sentences and their structures. In the experiment, two kinds of JSL animation were produced: in one the human model talked in JSL with hand gestures only, and in the other with facial movements as well as hand gestures. Subjects watched one of the two and reported the meanings they recognized from the JSL sentences. There were clear correlations between the occurrence of facial movements and correct recognition of the words and sentences.
However, the effects of facial expression on the understanding of JSL have not been clarified, although facial expression is said to be an important component of JSL. The purpose of this paper is to analyze the roles of facial expression in JSL and to improve the JSL animation based on the results. It will also describe the effects of the