A multi-user communication system in cyberspace is composed of the following subprocesses.
On the client system, each user's voice is captured on-line, A/D converted at 16 kHz with 16-bit resolution, and transmitted frame-by-frame to the server system over the network. In an intranet environment no speech compression is needed, but for Internet applications high compression is inevitable. The current prototype system is implemented in an intranet environment.
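The frame-by-frame transmission described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 20 ms frame length is an assumption (the paper does not state one), and actual audio capture and socket transport are omitted. Only the packetization of 16 kHz, 16-bit PCM into fixed-size frames is shown.

```python
import struct

SAMPLE_RATE = 16000      # 16 kHz A/D conversion, as in the system description
SAMPLE_WIDTH = 2         # 16-bit samples -> 2 bytes each
FRAME_MS = 20            # hypothetical frame length; chosen for illustration

def frame_pcm(samples):
    """Split a stream of 16-bit PCM samples into fixed-length byte frames
    ready for frame-by-frame transmission to the server."""
    frame_len = SAMPLE_RATE * FRAME_MS // 1000   # samples per frame (320)
    frames = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        chunk = samples[i:i + frame_len]
        # pack as little-endian signed 16-bit integers
        frames.append(struct.pack(f"<{frame_len}h", *chunk))
    return frames

# one second of audio (silence here) -> 50 frames of 20 ms each
pcm = [0] * SAMPLE_RATE
frames = frame_pcm(pcm)
print(len(frames), len(frames[0]))  # 50 frames, 640 bytes per frame
```

Each 640-byte frame would then be written to a TCP socket (or, for Internet use, compressed first, as the text notes).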
On the server system, the voice from each client is phonetically analyzed and converted into mouth shape and expression parameters. LPC cepstrum parameters are converted into mouth shape parameters by a neural network trained on vowel features. Fig.3 shows the neural network structure for this parameter conversion. Fig.4 shows an example
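The cepstrum-to-mouth-shape mapping can be sketched as a small feed-forward network. All dimensions here are assumptions for illustration (the paper does not give them), and the weights are random placeholders where trained values, learned from vowel features, would go; only the forward pass of the conversion is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions -- not specified in the text:
N_CEPSTRUM = 12   # LPC cepstrum coefficients per frame
N_HIDDEN = 16     # hidden units
N_MOUTH = 4       # mouth shape parameters (e.g. opening, width)

# In the real system these weights come from training on vowel features;
# random values stand in here.
W1 = rng.standard_normal((N_HIDDEN, N_CEPSTRUM))
b1 = np.zeros(N_HIDDEN)
W2 = rng.standard_normal((N_MOUTH, N_HIDDEN))
b2 = np.zeros(N_MOUTH)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cepstrum_to_mouth(cep):
    """Map one frame of LPC cepstrum coefficients to mouth shape
    parameters with a single-hidden-layer feed-forward network."""
    h = sigmoid(W1 @ cep + b1)
    return sigmoid(W2 @ h + b2)   # each parameter lies in (0, 1)

mouth = cepstrum_to_mouth(rng.standard_normal(N_CEPSTRUM))
print(mouth.shape)   # (4,)
```

Running the conversion per received frame yields a parameter vector that drives the avatar's mouth for that frame.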