Multimodal Framework for Analyzing the Affect of a Group of People

Cited by: 19
|
Authors
Huang, Xiaohua [1 ]
Dhall, Abhinav [2 ]
Goecke, Roland [3 ]
Pietikainen, Matti [1 ]
Zhao, Guoying [1 ,4 ]
Affiliations
[1] Univ Oulu, Ctr Machine Vis & Signal Anal, Oulu 90014, Finland
[2] Indian Inst Technol Ropar, Dept Comp Sci & Engn, Rupnagar 140001, India
[3] Univ Canberra, Human Ctr Technol Res Ctr, Bruce, ACT 2617, Australia
[4] Northwest Univ, Sch Informat & Technol, Xian 710069, Shaanxi, Peoples R China
Funding
Academy of Finland; National Natural Science Foundation of China;
Keywords
Facial expression recognition; Group-level emotion recognition; Feature descriptor; Information aggregation; Multi-modality; TEXTURE CLASSIFICATION; REPRESENTATION; EMOTIONS; MODEL;
DOI
10.1109/TMM.2018.2818015
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
With the advances in multimedia and the World Wide Web, users upload millions of images and videos every day to social networking platforms on the Internet. From the perspective of automatic human behavior understanding, it is of interest to analyze and model the affect exhibited by groups of people participating in social events in these images. However, the analysis of the affect expressed by multiple people is challenging due to the varied indoor and outdoor settings. Recently, a few interesting works have investigated face-based group-level emotion recognition (GER). In this paper, we propose a multimodal framework for enhancing the affective analysis ability of GER in challenging environments. Specifically, for encoding a person's information in a group-level image, we first propose an information aggregation method for generating feature descriptions of the face, upper body, and scene. We then revisit localized multiple kernel learning for fusing face, upper body, and scene information for GER in challenging environments. Extensive experiments are performed on two challenging group-level emotion databases (HAPPEI and GAFF) to investigate the roles of the face, upper body, scene information, and the multimodal framework. Experimental results demonstrate that the multimodal framework achieves promising performance for GER.
Pages: 2706 - 2721 (16 pages)
Related Papers (50 total)
  • [41] What People Look at in Multimodal Online Dating Profiles: How Pictorial and Textual Cues Affect Impression Formation
    van der Zanden, Tess
    Mos, Maria B. J.
    Schouten, Alexander P.
    Krahmer, Emiel J.
    COMMUNICATION RESEARCH, 2022, 49 (06) : 863 - 890
  • [42] Multimodal Framework for Mobile Interaction
    Cutugno, Francesco
    Leano, Vincenza Anna
    Mignini, Gianluca
    Rinaldi, Roberto
    PROCEEDINGS OF THE INTERNATIONAL WORKING CONFERENCE ON ADVANCED VISUAL INTERFACES, 2012, : 197 - 203
  • [43] Design of multimodal interface framework
    Lee, Yong-Hee
    Lee, Dong-Woo
    Choi, Eun-Jung
    Park, Jun-Seok
    9TH INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY: TOWARD NETWORK INNOVATION BEYOND EVOLUTION, VOLS 1-3, 2007, : 345 - +
  • [44] Alone versus In-a-group: A Multi-modal Framework for Automatic Affect Recognition
    Mou, Wenxuan
    Gunes, Hatice
    Patras, Ioannis
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2019, 15 (02)
  • [45] Multimodal Mapping - A Methodological Framework
    Palipane, Kelum
    JOURNAL OF ARCHITECTURE, 2019, 24 (01): : 91 - 113
  • [46] A unified framework for multimodal retrieval
    Rafailidis, D.
    Manolopoulou, S.
    Daras, P.
    PATTERN RECOGNITION, 2013, 46 (12) : 3358 - 3370
  • [47] Reflection/Commentary on a Past Article: "A Qualitative Framework for Collecting and Analyzing Data in Focus Group Research"
    Onwuegbuzie, Anthony J.
    INTERNATIONAL JOURNAL OF QUALITATIVE METHODS, 2018, 17 (01):
  • [48] An Evaluation Framework for Multimodal Interaction
    Krishnaswamy, Nikhil
    Pustejovsky, James
    PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2018), 2018, : 2127 - 2134
  • [49] An evaluation of a vocational group for people with mental health problems based on The WORKS framework
    Hitch, Danielle
    Robertson, Joanne
    Ochoteco, Hanno
    McNeill, Frank
    Williams, Anne
    Lhuede, Kate
    Baini, Adele
    Hillman, Alexandra
    Fossey, Ellie
    BRITISH JOURNAL OF OCCUPATIONAL THERAPY, 2017, 80 (12) : 717 - 725
  • [50] FAB : Framework for Analyzing Benchmarks
    Gohil, Varun
    Singh, Shreyas
    Awasthi, Manu
    COMPANION OF THE 2019 ACM/SPEC INTERNATIONAL CONFERENCE ON PERFORMANCE ENGINEERING (ICPE '19), 2019, : 33 - 36