as shown in Fig. 3 and, after a short delay, replaces the handwriting at the same size. Gesture commands are executed promptly. When the system takes a wrong action, users correct it with the "Undo" button, the editing command buttons, or the recognition command buttons. Taking these user responses as reward or penalty, the system adapts itself.
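The reward/penalty adaptation described above can be sketched as a standard Q-learning update, where an "Undo" press yields a penalty and acceptance yields a reward. This is only an illustrative sketch, not the system's actual implementation: the state name `"zigzag"`, the two interpretations, and all constants are hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical sketch: states are coarse stroke-shape categories,
# actions are competing interpretations of a stroke, and the user's
# response supplies the reward (+1 accept, -1 for an "Undo" press).
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
ACTIONS = ["character", "gesture"]
q = defaultdict(float)  # Q-values keyed by (state, action)

def choose(state):
    """Epsilon-greedy selection of an interpretation for a stroke."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update driven by the user's response."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

# Simulated session: this user always means "gesture" in state "zigzag".
random.seed(0)
for _ in range(200):
    a = choose("zigzag")
    r = 1.0 if a == "gesture" else -1.0  # -1.0 models an "Undo" press
    update("zigzag", a, r, "zigzag")

print(max(ACTIONS, key=lambda a: q[("zigzag", a)]))  # learned preference: gesture
```

Because the penalty flows directly from the correction buttons the user already presses, the table is trained as a side effect of normal editing, with no separate labeling step.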
The feasibility and effectiveness of the approach were demonstrated with the experimental system. We collected stroke data consisting of nearly a thousand characters written by a dozen people.
Concerning grouping, a recognition rate of 71.5% was achieved using the shape features alone, which is surprisingly high (upper line in Fig. 4). The lower line shows the recognition rate on a data set different from the training data set.
Note that the handwriting recognition algorithm does not assist the grouping and classification; we expect that tight coupling with the handwriting recognition engine would greatly improve the accuracy. Concerning gesture adaptation, our system was able to follow user preferences in real use.
Fig. 4 Initial learning and adaptive learning