New Generation Computing, 24(2006)97-128
Ohmsha, Ltd. and Springer
Received 30 November 2005
In this paper, we provide an overview of our research on multimodal media and contents using embodied lifelike agents. In particular, we describe our research centered on MPML (Multimodal Presentation Markup Language). MPML allows people to write and produce multimodal contents easily, and serves as a core for integrating the various components and functionalities important for multimodal media. To demonstrate the benefits and usability of MPML in a variety of environments, including the animated Web, 3D VRML space, mobile phones, and the physical world with a humanoid robot, several versions of MPML have been developed while keeping its basic format. Since the emotional behavior of an agent is an important factor in making it lifelike, and in having it accepted by people as an attractive and friendly style of human-computer interaction, emotion-related functions have been emphasized in MPML. To alleviate the workload of authoring contents, it is also necessary to endow the agents with a certain level of autonomy. We show some of our approaches toward this end.
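To give a flavor of the authoring style the abstract describes, the following is an illustrative sketch of an MPML-style script in which an agent greets the viewer and expresses an emotion. The tag and attribute names here are simplified placeholders and may differ from the actual MPML specification.

```xml
<!-- Hypothetical MPML-style presentation script.
     Tag and attribute names are illustrative, not the exact MPML spec. -->
<mpml>
  <head>
    <title>Product Introduction</title>
    <!-- Declare the lifelike character that will present the content -->
    <agent id="guide" character="peedy"/>
  </head>
  <body>
    <scene agents="guide">
      <speak>Welcome to our new product page!</speak>
      <!-- Emotion-related markup colors the agent's behavior -->
      <emotion type="happy">
        <speak>I am excited to show you around.</speak>
      </emotion>
      <!-- A gesture directed at an element of the page -->
      <act type="point" target="figure1"/>
    </scene>
  </body>
</mpml>
```

Markup of this kind lets content authors script speech, gesture, and emotional expression declaratively, without programming the agent's animation directly.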
Keywords: Lifelike Agent, Multimodal Contents, Content Description Language, Emotion, Affective Computing.