AFFECT BURST RECOGNITION USING MULTI-MODAL CUES

Cited by: 0
Authors
Turker, Bekir Berker [1]
Marzban, Shabbir [1]
Erzin, Engin [1]
Yemez, Yucel [1]
Sezgin, Tevfik Metin [1]
Affiliations
[1] Koc Univ, Muhendisl Fak, Istanbul, Turkey
Keywords
affect burst; multimodal recognition
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809
Abstract
Affect bursts, which are nonverbal expressions of emotion in conversation, play a critical role in analyzing affective states. Although a number of methods exist for affect burst detection and recognition using only audio information, little effort has been spent on combining cues in a multimodal setup. We suggest that facial gestures constitute a key component in characterizing affect bursts, and hence have the potential to enable more robust affect burst detection and recognition. We take a data-driven approach to characterize affect bursts using Hidden Markov Models (HMMs), and employ a multimodal decision fusion scheme that combines cues from audio and facial gestures for classification of affect bursts. We demonstrate the contribution of facial gestures to affect burst recognition by conducting experiments on an audiovisual database that comprises speech and facial motion data from various dyadic conversations.
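As a rough illustration (not taken from the paper) of the HMM-based, decision-level fusion the abstract describes, the Python sketch below trains one Gaussian HMM per affect-burst class for each modality and fuses the per-class log-likelihoods with a weighted sum. The hmmlearn library, the Gaussian emission model, the synthetic features, the class labels, and the fusion weight ALPHA are all illustrative assumptions, not details from the paper.

# Minimal sketch (not the paper's method) of HMM-based multimodal decision
# fusion for affect burst classification. Assumptions: hmmlearn's GaussianHMM,
# synthetic audio/facial-motion features, hypothetical class labels, and a
# fixed fusion weight ALPHA.
import numpy as np
from hmmlearn.hmm import GaussianHMM

CLASSES = ["laughter", "breathing", "other"]   # hypothetical burst classes
ALPHA = 0.6                                    # assumed audio weight in the fusion

def train_class_hmms(segments_by_class, n_states=3):
    """Train one Gaussian HMM per affect-burst class for a single modality."""
    models = {}
    for label, segments in segments_by_class.items():
        X = np.vstack(segments)                # stack all training segments
        lengths = [len(s) for s in segments]   # per-segment frame counts
        hmm = GaussianHMM(n_components=n_states, covariance_type="diag",
                          n_iter=50, random_state=0)
        hmm.fit(X, lengths)
        models[label] = hmm
    return models

def fuse_and_classify(audio_models, video_models, audio_seq, video_seq):
    """Score-level fusion: weighted sum of per-class log-likelihoods."""
    scores = {}
    for label in CLASSES:
        ll_audio = audio_models[label].score(audio_seq)
        ll_video = video_models[label].score(video_seq)
        scores[label] = ALPHA * ll_audio + (1.0 - ALPHA) * ll_video
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic training data: a few short feature sequences per class,
    # with class-dependent means so the toy problem is separable.
    audio_train = {c: [rng.normal(i, 1.0, size=(30, 13)) for _ in range(5)]
                   for i, c in enumerate(CLASSES)}
    video_train = {c: [rng.normal(i, 1.0, size=(30, 6)) for _ in range(5)]
                   for i, c in enumerate(CLASSES)}
    audio_models = train_class_hmms(audio_train)
    video_models = train_class_hmms(video_train)
    # Classify one synthetic test segment drawn near the "laughter" mean.
    test_audio = rng.normal(0, 1.0, size=(30, 13))
    test_video = rng.normal(0, 1.0, size=(30, 6))
    print(fuse_and_classify(audio_models, video_models, test_audio, test_video))

In such a score-level scheme, each modality contributes one log-likelihood per class and the weight would in practice be tuned on held-out data; this is only a sketch of the general idea, not the fusion rule used in the paper.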
Pages: 1608-1611
Number of pages: 4
Related Papers (50 in total)
  • [1] Affect Burst Detection Using Multi-Modal Cues
    Turker, B. Berker
    Marzban, Shabbir
    Sezgin, M. Tevfik
    Yemez, Yucel
    Erzin, Engin
    2015 23RD SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2015, : 1006 - 1009
  • [2] Multi-Modal Anomaly Detection by Using Audio and Visual Cues
    Rehman, Ata-Ur
    Ullah, Hafiz Sami
    Farooq, Haroon
    Khan, Muhammad Salman
    Mahmood, Tayyeb
    Khan, Hafiz Owais Ahmed
    IEEE ACCESS, 2021, 9 : 30587 - 30603
  • [3] Multi-Modal Face Recognition
    Shen, Haihong
    Ma, Liqun
    Zhang, Qishan
    2ND IEEE INTERNATIONAL CONFERENCE ON ADVANCED COMPUTER CONTROL (ICACC 2010), VOL. 5, 2010, : 612 - 616
  • [4] Multi-Modal Face Recognition
    Shen, Haihong
    Ma, Liqun
    Zhang, Qishan
    2010 8TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2010, : 720 - 723
  • [5] Face Recognition using Multi-modal Binary Patterns
    Thanh Phuong Nguyen
    Ngoc-Son Vu
    Caplier, Alice
    2012 21ST INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR 2012), 2012, : 2343 - 2346
  • [6] Facial emotion recognition using multi-modal information
    De Silva, LC
    Miyasato, T
    Nakatsu, R
    ICICS - PROCEEDINGS OF 1997 INTERNATIONAL CONFERENCE ON INFORMATION, COMMUNICATIONS AND SIGNAL PROCESSING, VOLS 1-3: THEME: TRENDS IN INFORMATION SYSTEMS ENGINEERING AND WIRELESS MULTIMEDIA COMMUNICATIONS, 1997, : 397 - 401
  • [7] Multi-modal orientation cues in homing pigeons
    Walcott, C
    INTEGRATIVE AND COMPARATIVE BIOLOGY, 2005, 45 (03) : 574 - 581
  • [8] Transformer Encoder With Multi-Modal Multi-Head Attention for Continuous Affect Recognition
    Chen, Haifeng
    Jiang, Dongmei
    Sahli, Hichem
    IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23 : 4171 - 4183
  • [9] Alone versus In-a-group: A Multi-modal Framework for Automatic Affect Recognition
    Mou, Wenxuan
    Gunes, Hatice
    Patras, Ioannis
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2019, 15 (02)
  • [10] MULTI-MODAL LEARNING FOR GESTURE RECOGNITION
    Cao, Congqi
    Zhang, Yifan
    Lu, Hanqing
    2015 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO (ICME), 2015,