Sign Language Synthesis Based on Intuitive Motion Primitives
Shan Lu(a), Hiroyuki Sakato(b), Tsuyoshi Uezono(c), and Seiji Igi(a)

(a) Communications Research Laboratory, 4-2-1 Nukui-Kitamachi, Koganei, Tokyo 184-8795, Japan
(b) The University of Electro-Communications
(c) Kagoshima Prefectural Institute of Industrial Technology
Sign language is the most important communication tool for the hearing-impaired. Several services already assist them, such as TV news programs that use sign language and translation services (via phone companies) that render spoken language into sign language. With the dramatic growth of the Internet, even more sign-language services are expected. However, such services are difficult to develop because sign language involves many visual and dynamic factors, such as hand motion, facial expression, and gesture.
Since sign-language grammar differs from spoken-language grammar, the synthesis of sign language is a two-step process. The first step is the linguistic translation from text to signs, and the second step is the visual synthesis of the translated signs, which is the focus of this paper.
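To make the division of labor between the two steps concrete, the following is a minimal sketch of such a pipeline. All names here (translate_to_signs, synthesize_animation, the toy lexicon) are hypothetical stand-ins for illustration, not the authors' implementation; a real translator would need full linguistic analysis, and the synthesis step would drive an animated figure rather than emit text.

```python
from typing import List

def translate_to_signs(text: str) -> List[str]:
    """Step 1 (hypothetical): linguistic translation from text to sign glosses.
    A toy word-to-gloss lookup stands in for real sign-language grammar."""
    lexicon = {"hello": ["HELLO"], "thank": ["THANK"], "you": ["YOU"]}
    signs: List[str] = []
    for word in text.lower().split():
        signs.extend(lexicon.get(word, []))
    return signs

def synthesize_animation(signs: List[str]) -> str:
    """Step 2 (hypothetical): visual synthesis of the translated signs.
    A textual trace stands in for generating the actual animation."""
    return " -> ".join(f"animate({s})" for s in signs)

print(synthesize_animation(translate_to_signs("hello thank you")))
# animate(HELLO) -> animate(THANK) -> animate(YOU)
```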
Several methods for synthesizing sign language by computer have been proposed. There are basically two: the first uses sign-language motion data acquired with a motion-capture system (Lu 1997b, Ohki 1994), and the second divides a sign into more basic elements, analogous to phonemes in spoken language (Nagashima 1996, Kurokawa 1992). Because the first method generally uses the recorded movement of a signer performing a whole sign word as its unit, it is not easy for a user to edit or modify the motion; acquiring the motion data with a motion-capture system is also difficult. The second method adopts units similar to phonemes in spoken language, but such units are not intuitive for representing the visual factors of sign language.
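The editability gap between the two methods can be illustrated with a small sketch. The representation below is hypothetical: the SignElement fields echo the handshape/location/movement distinctions commonly used in sign-language phonology, not the specific element sets of the cited systems. The point is only that a captured clip is an opaque block of frames, whereas a decomposed sign exposes named parameters a user can change.

```python
from dataclasses import dataclass, replace
from typing import List, Tuple

# Method 1: a sign stored as raw motion-capture frames (joint angles per frame).
# Editing it means re-recording or hand-tweaking thousands of numbers.
MocapClip = List[Tuple[float, ...]]

# Method 2: a sign stored as phoneme-like elements (hypothetical field names).
@dataclass(frozen=True)
class SignElement:
    hand_shape: str   # e.g. "flat", "fist"
    location: str     # e.g. "chin", "chest"
    movement: str     # e.g. "arc-forward", "circle"

thank: List[SignElement] = [SignElement("flat", "chin", "arc-forward")]

# Modifying the decomposed sign is a one-field edit on a readable structure.
slower_thank = [replace(e, movement="arc-forward-slow") for e in thank]
print(slower_thank)
```

As the comments suggest, the trade-off runs the other way as well: the frame-level clip reproduces a signer's motion faithfully, while the element-level description must still be mapped back to plausible motion at synthesis time.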