Implicit Image Tagging via Facial Information



Jiao, Jun and Pantic, Maja (2010) Implicit Image Tagging via Facial Information. In: 2nd International Workshop on Social Signal Processing, SSPW 2010, 29 October 2010, Florence, Italy (pp. 59-64).

Abstract: Implicit tagging is the technique of annotating multimedia data based on users' spontaneous nonverbal reactions. In this paper, a study is conducted to test whether users' facial expressions can be used to predict the correctness of image tags. The basic assumption behind this study is that users are likely to display a certain kind of emotion depending on the correctness of the tags. The dataset used in this paper consists of users' frontal-face videos collected during an implicit tagging experiment, in which participants were presented with tagged images and their facial reactions while viewing these images were recorded. Based on this dataset, facial points in the video sequences are tracked by a facial point tracker. Geometric features are calculated from the positions of the facial points to represent each video as a sequence of feature vectors, and Hidden Markov Models (HMMs) are used to classify this information in terms of behaviour typical for viewing a correctly or an incorrectly tagged image. Experimental results show that users' facial expressions can help judge the correctness of tags. The proposed method is effective for 16 out of 27 participants, with the highest prediction accuracy for a single participant being 72.1% and the highest overall accuracy being 77.98%.
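The classification step described in the abstract — scoring a sequence of facial-feature values under class-specific HMMs and choosing the class with the higher likelihood — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes one-dimensional Gaussian emissions and hand-set parameters for a hypothetical "correct-tag" and "incorrect-tag" model, whereas the paper uses multi-dimensional geometric feature vectors and trained models.

```python
import numpy as np

def hmm_log_likelihood(obs, start_prob, trans, means, var):
    """Forward-algorithm log-likelihood of a 1-D observation sequence
    under an HMM with Gaussian emissions (log domain for stability)."""
    def log_emission(x):
        # log N(x; mean_j, var) for every hidden state j
        return -0.5 * (np.log(2.0 * np.pi * var) + (x - means) ** 2 / var)

    log_trans = np.log(trans)
    log_alpha = np.log(start_prob) + log_emission(obs[0])
    for x in obs[1:]:
        # alpha_t(j) = sum_i alpha_{t-1}(i) * a_ij * b_j(x_t), in log space
        log_alpha = np.array([
            np.logaddexp.reduce(log_alpha + log_trans[:, j])
            for j in range(len(start_prob))
        ]) + log_emission(x)
    return np.logaddexp.reduce(log_alpha)

# Hypothetical two-state models: the "correct tag" model expects the
# feature to drift towards +1, the "incorrect tag" model towards -1.
# Real models would be trained (e.g. with Baum-Welch) on the tracked
# facial-feature sequences of each class.
start = np.array([0.5, 0.5])
trans = np.array([[0.7, 0.3], [0.3, 0.7]])
var = 0.25
means_correct = np.array([0.0, 1.0])
means_incorrect = np.array([0.0, -1.0])

def classify(obs):
    """Label a sequence by whichever class HMM scores it higher."""
    ll_c = hmm_log_likelihood(obs, start, trans, means_correct, var)
    ll_i = hmm_log_likelihood(obs, start, trans, means_incorrect, var)
    return "correct" if ll_c > ll_i else "incorrect"
```

In the study itself each video is represented as a sequence of multi-dimensional geometric feature vectors rather than scalars, so the emission model would be multivariate and the parameters estimated per participant and per class.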
Item Type:Conference or Workshop Item
Copyright:© 2010 ACM
Faculty: Electrical Engineering, Mathematics and Computer Science (EEMCS)
Link to this item:http://purl.utwente.nl/publications/75890
Official URL:http://dx.doi.org/10.1145/1878116.1878133

 
