Audiovisual laughter detection based on temporal features

Petridis, Stavros and Pantic, Maja (2008) Audiovisual laughter detection based on temporal features. In: 20th Belgian-Netherlands Conference on Artificial Intelligence, BNAIC 2008, 30-31 October 2008, Boekelo, Netherlands (pp. 351-352).

Open access: PDF (261 kB)
Abstract: Previous research on automatic laughter detection has mainly focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features, and we show that integrating audio and visual information improves performance over single-modal approaches. Static features are extracted on an audio/video frame basis and then combined with temporal features extracted over a temporal window, which describe the evolution of the static features over time. When tested in a person-independent way on 96 audiovisual sequences depicting spontaneously displayed (as opposed to posed) laughter and speech episodes, the proposed audiovisual approach achieves an F1 rate of over 89%.
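The abstract's idea of augmenting per-frame static features with temporal features computed over a window can be illustrated with a standard delta-feature recipe. This is only a hedged sketch: the window size, the regression-slope formula, and the random input are illustrative assumptions, not the paper's exact descriptors.

```python
import numpy as np

def delta_features(static, window=2):
    """Temporal (delta) features: the regression slope of each static
    feature over +/- `window` neighbouring frames (a common recipe;
    the paper's actual temporal descriptors may differ)."""
    # Pad by repeating edge frames so every frame has full context.
    padded = np.pad(static, ((window, window), (0, 0)), mode="edge")
    weights = np.arange(-window, window + 1)  # e.g. [-2, -1, 0, 1, 2]
    denom = np.sum(weights ** 2)
    deltas = np.empty_like(static, dtype=float)
    for t in range(static.shape[0]):
        # Weighted sum of neighbouring frames approximates the local slope.
        deltas[t] = weights @ padded[t : t + 2 * window + 1] / denom
    return deltas

# Concatenate static and temporal features per frame, as the abstract describes.
static = np.random.rand(100, 6)   # hypothetical: 100 frames, 6 static features
combined = np.hstack([static, delta_features(static)])
print(combined.shape)             # (100, 12)
```

The combined static-plus-temporal vectors would then feed a laughter-vs-speech classifier; the classifier itself is outside the scope of this sketch.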
Item Type: Conference or Workshop Item
Faculty: Electrical Engineering, Mathematics and Computer Science (EEMCS)
Link to this item: http://purl.utwente.nl/publications/65265

Metis ID: 255076