Research on Emotion Recognition Method of Flight Training Based on Multimodal Fusion

Cited: 0
Authors
Wang, Wendong [1 ]
Zhang, Haoyang [1 ]
Zhang, Zhibin [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Mech Engn, Xian, Peoples R China
Keywords
Emotion recognition; intelligent perception; adaptive dynamic fusion; multimodal fusion; ALGORITHMS; FEATURES; MACHINE;
DOI
10.1080/10447318.2023.2254644
Chinese Library Classification (CLC): TP3 [Computing technology, computer technology]
Discipline Code: 0812
Abstract
The emotional activities of the human body are regulated mainly by the autonomic nervous system, the central nervous system, and higher cognition in the human brain. This paper proposes an emotional state recognition method for pilot training tasks based on multimodal information fusion. An emotion perception and recognition system, a two-dimensional valence-arousal emotional model, and a multimodal intelligent perception model were established on the basis of the human-computer interaction mode during flight training; the intelligent perception system collects and perceives four kinds of peripheral physiological signals from pilots in real time. Building on traditional machine learning models, a binary tree support vector machine was designed to optimize and improve the multimodal information co-integration decision model, which increased the accuracy of emotional state recognition in flight training by 37.58% on average. The experimental results show that the method achieves accurate real-time monitoring and identification of emotional state, helps to improve the effectiveness of flight training and flight safety, maintains operational efficiency, and has significant research value and application prospects in the field of pilot training.
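A binary-tree SVM of the kind the abstract describes decomposes a multiclass emotion recognition problem into a cascade of binary classifiers, one per tree node. The sketch below is a minimal illustration only: the four valence-arousal quadrant labels, the class split order, the RBF kernel settings, and the synthetic 2-D features are all assumptions, not the paper's actual physiological features or implementation.

```python
# Hedged sketch of a binary-tree SVM for multiclass emotion recognition.
# Quadrant labels, split order, and synthetic features are illustrative only.
import numpy as np
from sklearn.svm import SVC

CLASSES = ["HV-HA", "HV-LA", "LV-HA", "LV-LA"]  # hypothetical V-A quadrants

def build_tree(X, y, labels):
    """Recursively train one binary SVM per node, halving the label set."""
    if len(labels) == 1:
        return labels[0]                          # leaf: one emotion class
    left, right = labels[: len(labels) // 2], labels[len(labels) // 2 :]
    mask = np.isin(y, labels)                     # samples still in play
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X[mask], np.isin(y[mask], left).astype(int))
    return (clf, build_tree(X, y, left), build_tree(X, y, right))

def predict_one(node, x):
    """Walk the tree: each node's SVM routes the sample left (1) or right (0)."""
    if not isinstance(node, tuple):
        return node
    clf, left, right = node
    branch = left if clf.predict(x.reshape(1, -1))[0] == 1 else right
    return predict_one(branch, x)

# Synthetic demo: four well-separated Gaussian clusters, one per quadrant.
rng = np.random.default_rng(0)
centers = np.array([[2.0, 2.0], [2.0, -2.0], [-2.0, 2.0], [-2.0, -2.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((40, 2)) for c in centers])
y = np.repeat(CLASSES, 40)

tree = build_tree(X, y, CLASSES)
preds = np.array([predict_one(tree, x) for x in X])
accuracy = float(np.mean(preds == y))
```

Each node discriminates between progressively smaller label groups, so deeper nodes face cleaner subproblems; the trade-off is that a misclassification near the root cannot be recovered, which makes the split order a tuning decision.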
Pages: 6478-6491
Page count: 14