Artificial Intelligence Research Institute, Spanish National Research Council (CSIC)
The Human-Computer Interaction (HCI) community is showing increasing interest in integrating affective computing into its technology. Particular attention is being paid to emotion recognition, since computer systems should be able to recognize human emotions in order to interact with humans in a more adaptive, natural, human-centered way. However, other aspects of emotion may be equally important, depending on the kind of interaction we aim at. My research has focused on two aspects of affective computing: on the one hand, the generation of affect-driven expressive musical performances that can elicit different emotional states in listeners, and that can also contribute to understanding how listeners perceive the expressive aspects a musician intends to communicate; on the other hand, emotion synthesis for action selection and behavior modulation in autonomous agents. Both topics raise many open questions that are equally relevant to the HCI community.
One of the main problems in the automatic generation of music is achieving the degree of expressiveness that characterizes human performances. The main difficulty arises from the fact that performance knowledge, including not only the sensible use of musical resources but also the ability to convey emotion, is largely tacit: musicians find it difficult to generalize and verbalize. Humans acquire it through a long process of observation, imitation, and experimentation (Dowling and Harwood 1986). It thus seems natural to adopt a similar approach for the automatic generation of expressive music. This was the main motivation for developing SaxEx, a Case-Based Reasoning system that turns inexpressive performances of jazz ballads into expressive ones by imitating human performances.
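To make the case-based approach concrete, the following is a minimal sketch, in Python, of the retrieve-and-reuse cycle such a system rests on. Everything in it is an illustrative assumption: the Note and Case structures, the feature weights, and the two expressive parameters (a loudness ratio and a duration ratio) stand in for a far richer musical representation, and SaxEx itself operates on analyses of recorded saxophone sound rather than on symbolic notes like these.

from dataclasses import dataclass

@dataclass
class Note:
    pitch: int        # MIDI pitch
    duration: float   # nominal duration in beats
    velocity: int     # MIDI velocity (loudness)
    beat_pos: float   # metrical position within the bar

@dataclass
class Case:
    context: Note     # a note, in context, taken from a human performance
    dyn_ratio: float  # loudness change the human performer applied
    dur_ratio: float  # lengthening/shortening the human performer applied

def similarity(a: Note, b: Note) -> float:
    """Hypothetical weighted distance between two musical contexts
    (higher is more similar)."""
    return -(abs(a.pitch - b.pitch) * 0.5
             + abs(a.duration - b.duration) * 2.0
             + abs(a.beat_pos - b.beat_pos) * 1.0)

def retrieve(case_base: list[Case], query: Note) -> Case:
    """Retrieve: find the stored human example most similar to the query note."""
    return max(case_base, key=lambda c: similarity(c.context, query))

def reuse(case: Case, note: Note) -> Note:
    """Reuse: transfer the human's expressive deviations to the flat note."""
    return Note(pitch=note.pitch,
                duration=note.duration * case.dur_ratio,
                velocity=min(127, round(note.velocity * case.dyn_ratio)),
                beat_pos=note.beat_pos)

def make_expressive(case_base: list[Case], flat_score: list[Note]) -> list[Note]:
    """Turn an inexpressive score into an expressive one, note by note."""
    return [reuse(retrieve(case_base, n), n) for n in flat_score]

The design point the sketch preserves is that no expressive rule is ever written down: the system's only knowledge is a case base of human examples, and expressiveness emerges from retrieving the most similar example and transferring its deviations, mirroring the observation-and-imitation process by which human performers acquire this tacit knowledge.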