Emotion Recognition based on Multimodal Information


Zeng, Zhihong and Pantic, Maja and Huang, Thomas S. (2009) Emotion Recognition based on Multimodal Information. In: Affective Information Processing. Springer Verlag, London, pp. 241-266. ISBN 9781848003057

Abstract: The following is a conversation between an interviewer and a subject during an Adult Attachment Interview (Roisman, Tsai, & Chiang, 2004). AUs are facial action units as defined in Ekman, Friesen, and Hager (2002).
The interviewer asked: “Now, please choose five adjectives to describe your childhood relationship with your mother when you were about five years old, or as far back as you can remember.”
The subject kept smiling (lip corner raiser, AU12) while listening. After the interviewer finished the question, the subject looked around and lowered her head (AU54) and eyes (AU64). She then lowered and drew together her eyebrows (AU4), producing deep vertical wrinkles and skin bunching between the eyebrows. The left side of her upper lip then raised (left AU10), and she scratched her chin with a finger.
After about 50 seconds of silence, the subject raised her head (AU53) and brows (AU1+AU2) and asked with a smile (AU12): “Should I . . . give what I have now?” The interviewer responded with a smile (AU12): “I guess those will be when you were five years old. Can you remember?”
The subject answered while touching her chin with a finger: “Yeap. Ok. Happy (smile, AU6+AU12), content, dependent, (silence, then she lowered her voice) what is next (silence, AU4 + left AU10), honest, (silence, AU4), innocent.”
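To make the AU annotations above concrete, here is a minimal sketch (not taken from the chapter) of how such an annotated episode might be encoded as structured data in Python. All field names, the Event class, and the describe helper are illustrative assumptions; the AU descriptions follow the FACS labels used in the abstract and the standard definitions in Ekman, Friesen, and Hager (2002).

# Illustrative sketch only: encoding the AU-annotated episode as data.
from dataclasses import dataclass, field
from typing import List

# FACS action units referenced in the abstract.
AU_NAMES = {
    1: "inner brow raiser",
    2: "outer brow raiser",
    4: "brow lowerer",
    6: "cheek raiser",
    10: "upper lip raiser",
    12: "lip corner raiser (smile)",
    53: "head up",
    54: "head down",
    64: "eyes down",
}

@dataclass
class Event:
    """One annotated moment: who acted, which AUs were shown, optional speech."""
    speaker: str
    aus: List[int] = field(default_factory=list)
    speech: str = ""

    def describe(self) -> str:
        au_text = ", ".join(f"AU{n} ({AU_NAMES.get(n, 'unknown')})" for n in self.aus)
        tail = f' | "{self.speech}"' if self.speech else ""
        return f"{self.speaker}: {au_text or 'no AUs'}{tail}"

# A few of the moments described in the abstract.
episode = [
    Event("subject", [12]),                     # smiles while listening
    Event("subject", [54, 64]),                 # lowers head and eyes
    Event("subject", [4]),                      # brows lowered and drawn together
    Event("subject", [53, 1, 2, 12], "Should I ... give what I have now?"),
    Event("interviewer", [12], "I guess those will be when you were five years old."),
]

if __name__ == "__main__":
    for e in episode:
        print(e.describe())

Printing the episode lists each moment with its AU codes and names, which is one simple way a multimodal annotation of this interview segment could be stored and inspected.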
Item Type: Book Section
Copyright: © 2009 Springer
Faculty: Electrical Engineering, Mathematics and Computer Science (EEMCS)
Link to this item: http://purl.utwente.nl/publications/69473
Official URL: http://dx.doi.org/10.1007/978-1-84800-306-4_14

Metis ID: 264299