The uulmMAC Database – A Multimodal Affective Corpus for Affective Computing in Human-Computer Interaction

Cited by: 20
Authors
Hazer-Rau, Dilana [1 ]
Meudt, Sascha [2 ]
Daucher, Andreas [1 ]
Spohrs, Jennifer [1 ]
Hoffmann, Holger [1 ]
Schwenker, Friedhelm [2 ]
Traue, Harald C. [1 ]
Affiliations
[1] Univ Ulm, Sect Med Psychol, Frauensteige 6, D-89075 Ulm, Germany
[2] Univ Ulm, Inst Neural Informat Proc, D-89081 Ulm, Germany
Keywords
affective corpus; multimodal sensors; overload; underload; interest; frustration; cognitive load; emotion recognition; stress research; affective computing; machine learning; human-computer interaction; COGNITIVE LOAD; MENTAL WORKLOAD; EMOTION; QUESTIONNAIRE; TECHNOLOGIES
DOI
10.3390/s20082308
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Subject classification codes
070302; 081704
Abstract
In this paper, we present a multimodal dataset for affective computing research acquired in a human-computer interaction (HCI) setting. An experimental mobile and interactive scenario was designed and implemented based on a gamified generic paradigm for the induction of dialog-based, HCI-relevant emotional and cognitive load states. It consists of six experimental sequences inducing Interest, Overload, Normal, Easy, Underload, and Frustration. Each sequence is followed by subjective feedback to validate the induction, a respiration baseline to let the physiological reactions level off, and a summary of results. Furthermore, prior to the experiment, three questionnaires related to emotion regulation (ERQ), emotional control (TEIQue-SF), and personality traits (TIPI) were collected from each subject to evaluate the stability of the induction paradigm. Based on this HCI scenario, the University of Ulm Multimodal Affective Corpus (uulmMAC), consisting of two homogeneous samples of 60 participants and 100 recording sessions, was generated. We recorded 16 sensor modalities, comprising 4 video, 3 audio, and 7 biophysiological streams as well as depth and pose streams. In addition, labels and annotations were collected. After recording, all data were post-processed and checked for technical and signal quality, resulting in the final uulmMAC dataset of 57 subjects and 95 recording sessions. The evaluation of the reported subjective feedback shows significant differences between the sequences, consistent with the induced states, and the analysis of the questionnaires shows stable results. In summary, our uulmMAC database is a valuable contribution to the fields of affective computing and multimodal data analysis: acquired in a mobile interactive scenario close to real HCI, it comprises a large number of subjects and allows transtemporal investigations. Validated via subjective feedback and checked for quality issues, it can be used for affective computing and machine learning applications.
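The abstract fixes the corpus layout precisely: six induced states per session and 16 sensor modalities (4 video, 3 audio, 7 biophysiological, plus depth and pose). The minimal Python sketch below mirrors that structure for readers who want to organize the data programmatically; all identifiers, stream names, and the feedback rating value are hypothetical illustrations, not the authors' published file format.

```python
# Hypothetical sketch of the uulmMAC corpus structure as described in the
# abstract. Field and stream names are assumptions, not the released format.
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Tuple


class InducedState(Enum):
    """The six experimental sequences of the induction paradigm."""
    INTEREST = "Interest"
    OVERLOAD = "Overload"
    NORMAL = "Normal"
    EASY = "Easy"
    UNDERLOAD = "Underload"
    FRUSTRATION = "Frustration"


@dataclass
class RecordingSession:
    """One quality-checked recording session (all field names hypothetical)."""
    subject_id: int
    # 16 sensor modalities in total, as reported in the abstract:
    # 4 video + 3 audio + 7 biophysiological + depth + pose streams.
    video_streams: List[str] = field(
        default_factory=lambda: [f"video_{i}" for i in range(4)])
    audio_streams: List[str] = field(
        default_factory=lambda: [f"audio_{i}" for i in range(3)])
    bio_streams: List[str] = field(
        default_factory=lambda: [f"bio_{i}" for i in range(7)])
    extra_streams: Tuple[str, str] = ("depth", "pose")
    # Per-sequence subjective feedback used to validate the induction.
    feedback: Dict[InducedState, float] = field(default_factory=dict)

    def modality_count(self) -> int:
        return (len(self.video_streams) + len(self.audio_streams)
                + len(self.bio_streams) + len(self.extra_streams))


if __name__ == "__main__":
    session = RecordingSession(subject_id=1)
    session.feedback[InducedState.OVERLOAD] = 4.5  # dummy rating value
    assert session.modality_count() == 16  # matches the 16 reported modalities
    print(f"Subject {session.subject_id}: {session.modality_count()} modalities")
```

A structure like this makes the post-processing step described in the abstract (dropping sessions that fail technical or signal-quality checks, from 100 down to 95) a simple filter over a list of `RecordingSession` objects.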
Pages: 33