Alone versus In-a-group: A Multi-modal Framework for Automatic Affect Recognition

Cited by: 12
Authors:
Mou, Wenxuan [1 ]
Gunes, Hatice [2 ]
Patras, Ioannis [1 ]
Affiliations:
[1] Queen Mary Univ London, London, England
[2] Univ Cambridge, Cambridge, England
Funding:
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords:
Affect analysis; multimodal interaction; group settings; non-verbal behaviours; context analysis; facial expression; face
DOI
10.1145/3321509
Chinese Library Classification:
TP [automation and computer technology]
Subject classification code:
0812
Abstract
Recognition and analysis of human affect have been researched extensively within computer science over the past two decades. However, most of this research has focused on recognizing the affect displayed by people in individual settings, and little attention has been paid to the analysis of affect expressed in group settings. In this article, we first analyze the affect expressed by each individual, in terms of the arousal and valence dimensions, in both individual and group videos, and then propose methods to recognize the contextual information, i.e., whether a person is alone or in a group, by analyzing their face and body behavioral cues. For affect analysis, we first devise affect recognition models separately for individual and group videos, and then introduce a cross-condition affect recognition model trained on a combination of the two types of data. We conduct a set of experiments on two datasets that contain both individual and group videos. Our experiments show that (1) the proposed Volume Quantized Local Zernike Moments Fisher Vector outperforms other unimodal features for affect analysis; (2) the temporal learning model, the Long Short-Term Memory (LSTM) network, performs better than the static learning model, the Support Vector Machine (SVM); (3) decision fusion improves affect recognition, indicating that body behaviors carry emotional information that is complementary, rather than redundant, to the emotional content of facial behaviors; and (4) it is possible to predict the context, i.e., whether a person is alone or in a group, from their non-verbal behavioral cues.
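
To make the pipeline in points (2) and (3) concrete, the following is a minimal PyTorch sketch of one temporal (LSTM) branch per modality, with decision-level fusion of the resulting class probabilities. This is an illustration only: the names (ModalityLSTM, fuse_decisions), the feature dimensions, the class count, and the equal fusion weight are all assumptions for the sketch, not values from the paper.

# Illustrative sketch only: every name and number here is an assumption,
# since the record above does not give the paper's exact configuration.
import torch
import torch.nn as nn

class ModalityLSTM(nn.Module):
    # One temporal branch per modality (face or body): an LSTM reads a
    # sequence of per-frame descriptors, and a linear head predicts an
    # arousal/valence class from the final hidden state.
    def __init__(self, feat_dim, hidden_dim=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):                  # x: (batch, time, feat_dim)
        _, (h_n, _) = self.lstm(x)         # h_n[-1]: final hidden state
        return self.head(h_n[-1])          # (batch, n_classes) logits

def fuse_decisions(face_logits, body_logits, w_face=0.5):
    # Decision-level fusion: weighted average of per-modality class
    # probabilities. The 50/50 weighting is a placeholder.
    p_face = torch.softmax(face_logits, dim=-1)
    p_body = torch.softmax(body_logits, dim=-1)
    return w_face * p_face + (1.0 - w_face) * p_body

# Usage: fuse face- and body-based predictions for 4 clips of 100 frames.
face_net, body_net = ModalityLSTM(feat_dim=128), ModalityLSTM(feat_dim=64)
face_seq = torch.randn(4, 100, 128)        # stand-in for face descriptors
body_seq = torch.randn(4, 100, 64)         # stand-in for body descriptors
fused = fuse_decisions(face_net(face_seq), body_net(body_seq))
labels = fused.argmax(dim=-1)              # fused affect class per clip

The design choice being illustrated is that each modality is modelled temporally on its own and only the final decisions are combined, which is what allows the experiments to attribute the fusion gain to complementary emotional information in the body channel.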
Pages: 23