Fusion Architectures for Multimodal Cognitive Load Recognition

Cited by: 3
Authors
Kindsvater, Daniel [1 ]
Meudt, Sascha [1 ]
Schwenker, Friedhelm [1 ]
Affiliations
[1] Ulm Univ, Inst Neural Informat Proc, D-89069 Ulm, Germany
Keywords
VOCAL EXPRESSIONS;
DOI
10.1007/978-3-319-59259-6_4
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge about the user's emotional state is important for achieving human-like, natural Human Computer Interaction (HCI) in modern technical systems. When communicating, humans rely on implicit signals such as body gestures and posture, vocal changes (e.g. pitch), and facial expressions. We investigate the relation between these signals and human emotion, specifically when completing easy or difficult tasks. Additionally, we include physiological data, which also vary with changes in cognitive load. We focus on discriminating between mental overload and mental underload, which can be useful, for example, in an e-tutorial system. Mental underload is a new term describing the state a person is in when completing a dull or boring task. We show how to select suitable features and build unimodal classifiers, which are then combined into a multimodal mental load estimate using Markov Fusion Networks (MFN) and Kalman Filter Fusion (KFF).
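The fusion step described in the abstract can be illustrated with a minimal sketch, assuming each unimodal classifier emits a confidence score in [0, 1] per time step and missing modalities are marked `None`. The function name `kalman_fuse` and the noise variances are illustrative assumptions, not the paper's actual KFF formulation:

```python
# Hedged sketch: fusing per-modality classifier scores with a 1-D Kalman filter.
# A latent mental-load level is tracked over time; each available modality's
# score at a time step is treated as a noisy measurement of that level.

def kalman_fuse(score_streams, process_var=0.01, measurement_var=0.25):
    """Fuse time-aligned unimodal score streams (lists of floats in [0, 1],
    or None where a modality's output is missing) into one smoothed estimate."""
    x, p = 0.5, 1.0               # initial state estimate and its variance
    fused = []
    for scores_t in zip(*score_streams):
        p += process_var          # predict: state assumed constant, variance grows
        for z in scores_t:        # update once per available modality
            if z is None:
                continue          # missing modality contributes no update
            k = p / (p + measurement_var)   # Kalman gain
            x += k * (z - x)                # pull estimate toward measurement
            p *= (1.0 - k)                  # variance shrinks after update
        fused.append(x)
    return fused

# Toy example: three modalities, four time steps, with dropouts.
audio  = [0.9, 0.8, None, 0.7]
video  = [0.6, None, 0.5, 0.6]
physio = [0.7, 0.7, 0.6, None]
print(kalman_fuse([audio, video, physio]))
```

A practical appeal of this scheme, which the abstract hints at, is that modalities can drop out at any time step without breaking the fused estimate; the filter simply skips the missing measurement and carries its prediction forward.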
Pages: 36-47
Page count: 12
Related Papers
50 records
  • [1] Multimodal Data Fusion Architectures in Audiovisual Speech Recognition
    Sayed, Hadeer M.
    ElDeeb, Hesham E.
    Taiel, Shereen A.
    INFORMATION SYSTEMS AND TECHNOLOGIES, VOL 2, WORLDCIST 2023, 2024, 800 : 655 - 667
  • [2] Multimodal fusion for pattern recognition
    Khan, Zubair
    Kumar, Shishir
    Garcia Reyes, Edel B.
    Mahanti, Prabhat
    PATTERN RECOGNITION LETTERS, 2018, 115 : 1 - 3
  • [3] Learner's cognitive state recognition based on multimodal physiological signal fusion
    Li, Yingting
    Li, Yue
    He, Xiuling
    Fang, Jing
    Zhou, Chongyang
    Liu, Chenxu
    APPLIED INTELLIGENCE, 2025, 55 (02)
  • [4] A Multimodal Behavior Recognition Network with Interconnected Architectures
    Long, Nuoer
    Un, Kin-Seong
    Xiong, Chengpeng
    Li, Zhuolin
    Chen, Shaobin
    Tan, Tao
    Lam, Chan-Tong
    Sun, Yue
    2024 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO WORKSHOPS, ICMEW 2024, 2024,
  • [5] Multimodal data fusion for object recognition
    Knyaz, Vladimir
    MULTIMODAL SENSING: TECHNOLOGIES AND APPLICATIONS, 2019, 11059
  • [6] Multimodal fusion recognition for digital twin
    Zhou, Tianzhe
    Zhang, Xuguang
    Kang, Bing
    Chen, Mingkai
    DIGITAL COMMUNICATIONS AND NETWORKS, 2024, 10 (02) : 337 - 346
  • [7] Fusion Mappings for Multimodal Affect Recognition
    Kaechele, Markus
    Schels, Martin
    Thiam, Patrick
    Schwenker, Friedhelm
    2015 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI), 2015, : 307 - 313
  • [8] Multimodal Emotion Recognition using Deep Learning Architectures
    Ranganathan, Hiranmayi
    Chakraborty, Shayok
    Panchanathan, Sethuraman
    2016 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2016), 2016,
  • [9] Multimodal Fusion for Cognitive Load Measurement in an Adaptive Virtual Reality Driving Task for Autism Intervention
    Zhang, Lian
    Wade, Joshua
    Bian, Dayi
    Fan, Jing
    Swanson, Amy
    Weitlauf, Amy
    Warren, Zachary
    Sarkar, Nilanjan
    UNIVERSAL ACCESS IN HUMAN-COMPUTER INTERACTION: ACCESS TO LEARNING, HEALTH AND WELL-BEING, UAHCI 2015, PT III, 2015, 9177 : 709 - 720