AFFECT BURST RECOGNITION USING MULTI-MODAL CUES

Cited by: 0
Authors
Turker, Bekir Berker [1]
Marzban, Shabbir [1]
Erzin, Engin [1]
Yemez, Yucel [1]
Sezgin, Tevfik Metin [1]
Institutions
[1] Koc Univ, Muhendisl Fak, Istanbul, Turkey
Keywords
affect burst; multimodal recognition
DOI
Not available
CLC Classification
TM (Electrical Technology); TN (Electronics and Communication Technology)
Discipline Codes
0808; 0809
Abstract
Affect bursts, which are nonverbal expressions of emotion in conversation, play a critical role in analyzing affective states. Although a number of methods exist for affect burst detection and recognition using only audio information, little effort has been devoted to combining cues in a multimodal setup. We suggest that facial gestures constitute a key component of affect bursts, and hence offer potential for more robust affect burst detection and recognition. We take a data-driven approach to characterizing affect bursts using Hidden Markov Models (HMMs), and employ a multimodal decision fusion scheme that combines cues from audio and facial gestures to classify affect bursts. We demonstrate the contribution of facial gestures to affect burst recognition through experiments on an audiovisual database comprising speech and facial motion data from various dyadic conversations.
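To make the decision fusion scheme concrete, below is a minimal sketch, not the authors' implementation: one Gaussian HMM is trained per affect-burst class and per modality, and the per-class audio and facial-gesture log-likelihoods are combined with a weight. The hmmlearn dependency, the feature layout, the helper names (train_hmms, fuse_and_classify), and the fusion weight lam are all illustrative assumptions.

```python
# Sketch of HMM-based affect burst classification with weighted late fusion
# of audio and facial-gesture log-likelihoods (illustrative, not the paper's code).
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency


def train_hmms(features_by_class, n_states=3):
    """Fit one HMM per class. features_by_class: {label: list of (T_i, D) arrays}."""
    models = {}
    for label, seqs in features_by_class.items():
        X = np.vstack(seqs)                 # concatenate all training sequences
        lengths = [len(s) for s in seqs]    # per-sequence lengths for hmmlearn
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models


def fuse_and_classify(audio_models, video_models, audio_seq, video_seq, lam=0.5):
    """Pick the class maximizing the weighted sum of modality log-likelihoods."""
    best_label, best_score = None, -np.inf
    for label in audio_models:
        score = (lam * audio_models[label].score(audio_seq)
                 + (1.0 - lam) * video_models[label].score(video_seq))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

In practice the fusion weight lam would be tuned on held-out data, trading off the reliability of the audio stream against the facial motion stream.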
Pages: 1608-1611
Page count: 4