Multimodal emotion recognition algorithm based on edge network emotion element compensation and data fusion

Cited by: 5
Author
Wang, Yu [1 ,2 ]
Affiliations
[1] Henan Univ Engn, Coll Comp Sci, Zhengzhou 451191, Henan, Peoples R China
[2] State Key Lab Math Engn & Adv Comp, Zhengzhou 450002, Henan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Emotion recognition; Edge network; Multimodal; Emotion compensation; Data fusion;
DOI
10.1007/s00779-018-01195-9
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Emotion recognition feature sets collected over complex networks suffer from redundant information, poor recognizability, and data loss, all of which interfere heavily with the emotional features extracted from speech and images. To address these problems, this paper studies a multimodal emotion recognition algorithm based on emotion element compensation in the context of streaming media communication over an edge network. First, an edge streaming media network is designed that shifts traditional server-centric transmission tasks to edge nodes, recasting complex-network problems as edge-node and user-side problems. Second, multimodal parallel training is realized through a cooperative weight-equalization scheme, so that nonlinear-mapping inference is mapped onto a better emotional data fusion relationship. Third, in view of the nonlinearity and uncertainty of the different types of emotional data samples in each training subset, emotion recognition data compensation is generalized to emotion element compensation, which facilitates qualitative analysis and optimal decision-making. Finally, simulation results show that the proposed multimodal emotion recognition algorithm improves the recognition rate by 3.5%, reduces the average response time by 5.7%, and reduces the average number of iterations per unit time by a factor of 1.35.
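The abstract names two computational steps that a concrete illustration can clarify: weight-equalized fusion of modality features and compensation of lost emotion elements. The minimal Python sketch below shows one plausible reading; the function names, the convex-weight fusion rule, and the mean-imputation compensation are all assumptions made for illustration, since the abstract does not give the paper's actual formulas.

import numpy as np

def equalized_fusion(speech_feat, image_feat, w_speech=0.5, w_image=0.5):
    # Convex combination of the two modality feature vectors; the
    # weight pair stands in for the paper's cooperative weight
    # equalization, whose exact update rule the abstract does not give.
    assert np.isclose(w_speech + w_image, 1.0)
    return w_speech * speech_feat + w_image * image_feat

def compensate_missing(feat, reference_mean, mask):
    # Emotion-element compensation sketched as mean imputation:
    # dimensions flagged as lost (mask == False) are filled from a
    # reference statistic of the training subset (hypothetical choice).
    return np.where(mask, feat, reference_mean)

# Toy usage: 8-dimensional emotion features with two lost speech dimensions.
rng = np.random.default_rng(0)
speech = rng.normal(size=8)
image = rng.normal(size=8)
mask = np.ones(8, dtype=bool)
mask[[2, 5]] = False                      # simulate data loss on the edge link
speech = compensate_missing(speech, np.zeros(8), mask)
fused = equalized_fusion(speech, image, w_speech=0.6, w_image=0.4)
print(fused)                              # fused 8-dimensional emotion feature

In this reading, compensation runs per modality before fusion, so a lossy edge link degrades only the affected feature dimensions rather than the whole fused representation.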
Pages: 383-392
Number of pages: 10
Related Papers
50 records in total
  • [1] Multimodal emotion recognition algorithm based on edge network emotion element compensation and data fusion
    Yu Wang
    [J]. Personal and Ubiquitous Computing, 2019, 23 : 383 - 392
  • [2] Emotion Recognition Based on Feedback Weighted Fusion of Multimodal Emotion Data
    Wei, Wei
    Jia, Qingxuan
    Feng, Yongli
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (IEEE ROBIO 2017), 2017, : 1682 - 1687
  • [3] Multimodal Emotion Recognition Based on Feature Fusion
    Xu, Yurui
    Wu, Xiao
    Su, Hang
    Liu, Xiaorui
    [J]. 2022 INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2022), 2022, : 7 - 11
  • [4] Multimodal music emotion recognition method based on multi data fusion
    Zeng, Fanguang
    [J]. INTERNATIONAL JOURNAL OF ARTS AND TECHNOLOGY, 2023, 14 (04) : 271 - 282
  • [5] An Algorithm of Emotion Recognition And Valence of Drivers on Multimodal Data
    Guo, Lu
    Shen, Yun
    Ding, Peng
    [J]. 2022 IEEE INTERNATIONAL SYMPOSIUM ON BROADBAND MULTIMEDIA SYSTEMS AND BROADCASTING (BMSB), 2022
  • [6] Audio-Visual Fusion Network Based on Conformer for Multimodal Emotion Recognition
    Guo, Peini
    Chen, Zhengyan
    Li, Yidi
    Liu, Hong
    [J]. ARTIFICIAL INTELLIGENCE, CICAI 2022, PT II, 2022, 13605 : 315 - 326
  • [7] Hierarchical Attention-Based Multimodal Fusion Network for Video Emotion Recognition
    Liu, Xiaodong
    Li, Songyang
    Wang, Miao
    [J]. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2021, 2021
  • [8] GraphMFT: A graph network based multimodal fusion technique for emotion recognition in conversation
    Li, Jiang
    Wang, Xiaoping
    Lv, Guoqing
    Zeng, Zhigang
    [J]. NEUROCOMPUTING, 2023, 550
  • [9] HYBRID FUSION BASED APPROACH FOR MULTIMODAL EMOTION RECOGNITION WITH INSUFFICIENT LABELED DATA
    Kumar, Puneet
    Khokher, Vedanti
    Gupta, Yukti
    Raman, Balasubramanian
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 314 - 318
  • [10] Canonical Correlation Analysis for Data Fusion in Multimodal Emotion Recognition
    Nemati, Shahla
    [J]. 2018 9TH INTERNATIONAL SYMPOSIUM ON TELECOMMUNICATIONS (IST), 2018, : 676 - 681