Multimodal Backchannel Generation for Conversational Agents


Heylen, Dirk (2007) Multimodal Backchannel Generation for Conversational Agents. In: Workshop on Multimodal Output Generation, MOG 2007, 25-26 January 2007, Aberdeen, Scotland.

Full text: PDF (363 KB)
Abstract: Listeners in face-to-face interactions do not merely attend to the communicative signals emitted by speakers; they send out signals themselves in the various modalities available to them: facial expressions, gestures, head movements, and speech. These communicative signals, operating in the so-called back-channel, mostly function as feedback on the actions of the speaker: they provide information on the reception of the signals, propel the interaction forward, mark understanding, or offer insight into the attitudes and emotions that the speech gives rise to. To generate appropriate behaviours for a conversational agent in response to the speech of a human interlocutor, we need a better understanding of the kinds of behaviours displayed, their timing, their determinants, and their effects. A major challenge in generating responsive behaviours, however, is real-time interpretation, as responses in the back-channel are generally very fast. The usual solution to this problem has been to rely on surface-level cues. We discuss ongoing work on a sensitive artificial listening agent that tries to accomplish this attentive listening behaviour.
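The abstract does not spell out what relying on surface-level cues looks like in practice. As a rough illustration only, the minimal Python sketch below triggers a backchannel from prosodic surface cues alone (a stretch of low pitch followed by a brief pause, in the spirit of prosody-based backchannel rules from the literature). All class names, thresholds, and the frame format are hypothetical, not taken from the paper.

    # Hypothetical sketch of a surface-cue backchannel trigger.
    # Thresholds and the Frame format are illustrative, not from the paper.

    from dataclasses import dataclass

    @dataclass
    class Frame:
        pitch_hz: float   # estimated F0 of the speaker, 0.0 when unvoiced
        energy: float     # short-time energy of the audio frame

    class BackchannelTrigger:
        """Fires when a stretch of low pitch is followed by a short pause.

        Reacting to prosody and silence instead of full semantic
        interpretation is what lets a listening agent respond at the
        speed back-channel behaviour requires.
        """

        def __init__(self, low_pitch_hz=110.0, min_low_frames=10,
                     pause_energy=0.01, min_pause_frames=5):
            self.low_pitch_hz = low_pitch_hz
            self.min_low_frames = min_low_frames
            self.pause_energy = pause_energy
            self.min_pause_frames = min_pause_frames
            self.low_run = 0      # consecutive low-pitch frames seen
            self.pause_run = 0    # consecutive silent frames seen

        def update(self, frame: Frame) -> bool:
            """Consume one audio frame; return True when a backchannel is due."""
            if frame.energy < self.pause_energy:
                self.pause_run += 1
                # Enough low-pitch speech followed by a short pause: respond.
                if (self.low_run >= self.min_low_frames
                        and self.pause_run >= self.min_pause_frames):
                    self.low_run = 0
                    self.pause_run = 0
                    return True
            else:
                self.pause_run = 0
                if 0.0 < frame.pitch_hz < self.low_pitch_hz:
                    self.low_run += 1
                else:
                    self.low_run = 0
            return False

In use, frames from a real-time pitch tracker would be fed to update(); whenever it returns True, the agent plays a vocalisation ("mm-hm") or a head nod. The point of the sketch is the design choice the abstract describes: the trigger never inspects the words being said, only shallow acoustic features, so it can keep pace with the fast timing of back-channel responses.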
Item Type: Conference or Workshop Item
Faculty: Electrical Engineering, Mathematics and Computer Science (EEMCS)
Link to this item: http://purl.utwente.nl/publications/64602
Proceedings URL: http://www.ctit.utwente.nl/library/proceedings/MOG2007.pdf

 


Metis ID: 245986