A New Multi-modal Dataset for Human Affect Analysis

Cited: 0
Authors
Wei, Haolin [1 ]
Monaghan, David S. [1 ]
O'Connor, Noel E. [1 ]
Scanlon, Patricia [2 ]
Affiliations
[1] Dublin City Univ, Insight Ctr Data Analyt, Dublin 9, Ireland
[2] Alcatel Lucent Dublin, Bell Labs Ireland, Dublin, Ireland
Keywords
Spontaneous affect dataset; Continuous annotation; Multi-modal; Depth; Affect recognition
DOI: not available
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
In this paper we present a new multi-modal dataset of spontaneous three-way human interactions. Participants were recorded in an unconstrained environment at various locations during a sequence of debates held over a Skype-style video-conference arrangement. An additional depth modality was introduced, permitting the capture of 3D information alongside the video and audio signals. The dataset comprises 16 participants and is subdivided into 6 unique sections. It was manually annotated on a continuous scale across 5 affective dimensions: arousal, valence, agreement, content and interest. The annotation was performed by three human annotators, and the ensemble average of their ratings is included in the dataset. The corpus enables the analysis of human affect during conversations in a real-life scenario. We first briefly review existing affect datasets and the methodologies used to construct them, and then detail how our unique dataset was constructed.
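
The ensemble-averaging step described in the abstract is straightforward to reproduce. Below is a minimal Python sketch (not the authors' code) of how three annotators' continuous traces for one affective dimension could be combined into a single averaged trace; it assumes all traces have already been resampled to a common frame rate and length, and the function name and array shapes are illustrative assumptions.

    # Minimal sketch of ensemble-averaging continuous affect annotations.
    # Assumes traces are aligned and resampled to a common length.
    import numpy as np

    def ensemble_average(annotations: np.ndarray) -> np.ndarray:
        """Average continuous annotation traces across annotators.

        annotations: array of shape (n_annotators, n_frames), one row per
        annotator, values on the chosen affective scale (e.g. valence).
        Returns the per-frame mean trace of shape (n_frames,).
        """
        return annotations.mean(axis=0)

    # Example: three annotators rating valence over five frames.
    traces = np.array([
        [0.1, 0.2, 0.4, 0.3, 0.1],
        [0.0, 0.3, 0.5, 0.2, 0.2],
        [0.2, 0.2, 0.3, 0.4, 0.0],
    ])
    print(ensemble_average(traces))  # [0.1, 0.2333..., 0.4, 0.3, 0.1]

A simple per-frame mean is one common choice for fusing continuous annotations; other schemes (e.g. inter-annotator-agreement weighting) exist, but the abstract specifies the ensemble average.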
Pages: 42-51 (10 pages)
相关论文
共 50 条
  • [31] MOFA: A novel dataset for Multi-modal Image Fusion Applications
    Xiao, Kaihua
    Kang, Xudong
    Liu, Haibo
    Duan, Puhong
    INFORMATION FUSION, 2023, 96 : 144 - 155
  • [32] Multi-modal Gesture Recognition Challenge 2013: Dataset and Results
    Escalera, Sergio
    Gonzalez, Jordi
    Baro, Xavier
    Reyes, Miguel
    Lopes, Oscar
    Guyon, Isabelle
    Athitsos, Vassilis
    Escalante, Hugo J.
    ICMI'13: PROCEEDINGS OF THE 2013 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2013, : 445 - 452
  • [33] AFFECT BURST RECOGNITION USING MULTI-MODAL CUES
    Turker, Bekir Berker
    Marzban, Shabbir
    Erzin, Engin
    Yemez, Yucel
    Sezgin, Tevfik Metin
    2014 22ND SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2014, : 1608 - 1611
  • [34] Multi-modal human identification system
    Ivanov, Y
    WACV 2005: SEVENTH IEEE WORKSHOP ON APPLICATIONS OF COMPUTER VISION, PROCEEDINGS, 2005, : 164 - 170
  • [35] The origin of human multi-modal communication
    Levinson, Stephen C.
    Holler, Judith
    PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY B-BIOLOGICAL SCIENCES, 2014, 369 (1651)
  • [36] Multi-modal human aggression detection
    Kooij, J. F. P.
    Liem, M. C.
    Krijnders, J. D.
    Andringa, T. C.
    Gavrila, D. M.
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2016, 144 : 106 - 120
  • [37] Affect Burst Detection Using Multi-Modal Cues
    Turker, B. Berker
    Marzban, Shabbir
    Sezgin, M. Tevfik
    Yemez, Yucel
    Erzin, Engin
    2015 23RD SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2015, : 1006 - 1009
  • [38] Multi-modal analysis of human motion from external measurements
    Dariush, B
    Hemami, H
    Parnianpour, M
    JOURNAL OF DYNAMIC SYSTEMS MEASUREMENT AND CONTROL-TRANSACTIONS OF THE ASME, 2001, 123 (02): : 272 - 278
  • [39] OHO: A Multi-Modal, Multi-Purpose Dataset for Human-Robot Object Hand-Over
    Stephan, Benedict
    Koehler, Mona
    Mueller, Steffen
    Zhang, Yan
    Gross, Horst-Michael
    Notni, Gunther
    SENSORS, 2023, 23 (18)
  • [40] Multi-Modal Deep Analysis for Multimedia
    Zhu, Wenwu
    Wang, Xin
    Li, Hongzhi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (10) : 3740 - 3764