Generating Embodied Information Presentations


Theune, M. and Heylen, D. and Nijholt, A. (2005) Generating Embodied Information Presentations. In: O. Stock & M. Zancanaro (Eds.), Multimodal Intelligent Information Presentation. Kluwer Academic Publishers, pp. 47-69. ISBN 9781402030499

Open access
Abstract: The output modalities available for information presentation by embodied, human-like agents include both language and various nonverbal cues, such as pointing and gesturing. These nonverbal modalities can be used to emphasize, extend, or even replace the language output produced by the agent. To deal with the interdependence between language and nonverbal signals, their production processes should be integrated. In this chapter, we discuss the issues involved in extending a natural language generation system with the generation of nonverbal signals. We sketch a general architecture for embodied language generation, discussing the interaction between the production of nonverbal signals and language generation, and the different factors influencing the choice between the available modalities. As an example, we describe the generation of route descriptions by an embodied agent in a 3D environment.
Item Type:Book Section
Copyright:© 2005 Kluwer Academic Publishers.
Research Group: Electrical Engineering, Mathematics and Computer Science (EEMCS)



Metis ID: 221067