Fusion of audio and visual cues for laughter detection



Petridis, Stavros and Pantic, Maja (2008) Fusion of audio and visual cues for laughter detection. In: International Conference on Content-Based Image and Video Retrieval, CIVR 2008, 7-9 July 2008, Niagara Falls, Canada.

Full text: PDF (613 KB), restricted to UT campus only
Abstract:Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech, and we show that integrating the information from the audio and video channels leads to improved performance over single-modal approaches. Each channel consists of two streams (cues): facial expressions and head movements for video, and spectral and prosodic features for audio. We used decision-level fusion to integrate the information from the two channels and experimented with the SUM rule and a neural network as the integration functions. The results indicate that even a simple linear function such as the SUM rule achieves very good performance in audiovisual fusion. We also experimented with different combinations of cues, the most informative being the facial expressions and the spectral features. The best combination of cues is the integration of facial expressions, spectral and prosodic features when a neural network is used as the fusion method. When tested in a person-independent way on 96 audiovisual sequences depicting spontaneously displayed (as opposed to posed) laughter and speech episodes, the proposed audiovisual approach achieves over 90% recall and over 80% precision.
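The decision-level fusion described in the abstract can be illustrated with a minimal sketch (Python; not the authors' implementation): assuming each cue-level classifier outputs a posterior probability that a sequence is laughter, the SUM rule averages these posteriors and thresholds the result. The four cue names follow the abstract; the probability values and the 0.5 threshold are illustrative assumptions.

    # Minimal sketch of decision-level fusion with the SUM rule.
    # Assumes four cue-level classifiers (facial expressions, head
    # movements, spectral, prosodic) each output a posterior
    # probability of laughter; the values below are illustrative only.
    cue_probs = {
        "facial_expressions": 0.91,  # video stream 1
        "head_movements": 0.62,      # video stream 2
        "spectral": 0.85,            # audio stream 1
        "prosodic": 0.70,            # audio stream 2
    }

    def sum_rule(probs, threshold=0.5):
        """Average the per-cue posteriors (SUM rule) and threshold."""
        score = sum(probs.values()) / len(probs)
        return ("laughter" if score > threshold else "speech"), score

    label, score = sum_rule(cue_probs)
    print(label, round(score, 2))  # -> laughter 0.77

The neural-network variant mentioned in the abstract would replace the fixed average with a learned combination of the same per-cue posteriors.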
Item Type:Conference or Workshop Item
Copyright:© 2008 ACM
Faculty:Electrical Engineering, Mathematics and Computer Science (EEMCS)
Link to this item:http://purl.utwente.nl/publications/62669
Official URL:http://dx.doi.org/10.1145/1386352.1386396

Metis ID: 255087