Multi-level context extraction and attention-based contextual inter-modal fusion for multimodal sentiment analysis and emotion classification

Cited: 0
Authors
Mahesh G. Huddar
Sanjeev S. Sannakki
Vijay S. Rajpurohit
Affiliations
[1] Hirasugar Institute of Technology,Department of Computer Science and Engineering
[2] Gogte Institute of Technology,Department of Computer Science and Engineering
Keywords
Attention model; Inter-modal fusion; Multi-level contextual information; Bidirectional recurrent neural network
DOI
Not available
Abstract
Recent advancements in Internet technology and its associated services have led users to post large amounts of multimodal data on social media websites, online shopping portals, video repositories, and similar platforms. With this huge volume of multimodal content available, multimodal sentiment classification and affective computing have become among the most actively researched topics. Extracting context from neighboring utterances and weighting the importance of inter-modal utterances before multimodal fusion are the key research issues in this field. This article presents a novel approach that extracts context at multiple levels and models the importance of inter-modal utterances for sentiment and emotion classification. Experiments are conducted on two publicly available datasets: CMU-MOSI for sentiment analysis and IEMOCAP for emotion classification. By incorporating utterance-level contextual information and the importance of inter-modal utterances, the proposed model outperforms the standard baselines by over 3% in classification accuracy.
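The attention-based inter-modal fusion step described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration under assumed details, not the authors' implementation: the function name `inter_modal_fusion`, the dot-product affinity scores, and the concatenation scheme are all assumptions; the paper's actual model also involves bidirectional recurrent layers for multi-level context extraction, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inter_modal_fusion(A, B):
    """Attention-based fusion of two modality sequences (hypothetical sketch).

    A, B: (n_utterances, d) contextual utterance features from two modalities.
    Returns a (n_utterances, 4*d) fused representation.
    """
    scores = A @ B.T                       # pairwise utterance affinities
    A_att = softmax(scores, axis=1) @ B    # each A-utterance attends over B
    B_att = softmax(scores.T, axis=1) @ A  # each B-utterance attends over A
    # Concatenate original and attended features per utterance
    return np.concatenate([A, A_att, B, B_att], axis=1)

n_utt, d = 5, 8
text = rng.standard_normal((n_utt, d))   # stand-in for text-modality features
audio = rng.standard_normal((n_utt, d))  # stand-in for audio-modality features
fused = inter_modal_fusion(text, audio)
print(fused.shape)  # (5, 32)
```

The attention weights let each utterance in one modality emphasize the most relevant utterances in the other before the concatenated representation is passed to a classifier.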
Pages: 103-112 (9 pages)
Related papers
50 items in total
  • [21] AtCAF: Attention-based causality-aware fusion network for multimodal sentiment analysis
    Huang, Changqin
    Chen, Jili
    Huang, Qionghao
    Wang, Shijin
    Tu, Yaxin
    Huang, Xiaodi
    INFORMATION FUSION, 2025, 114
  • [23] A Self-Attention-Based Multi-Level Fusion Network for Aspect Category Sentiment Analysis
    Tian, Dong
    Shi, Jia
    Feng, Jianying
    COGNITIVE COMPUTATION, 2023, 15 (04) : 1372 - 1390
  • [24] Multi-task Gated Contextual Cross-Modal Attention Framework for Sentiment and Emotion Analysis
    Sangwan, Suyash
    Chauhan, Dushyant Singh
    Akhtar, Md Shad
    Ekbal, Asif
    Bhattacharyya, Pushpak
    NEURAL INFORMATION PROCESSING (ICONIP 2019), PT IV, 2019, 1142 : 662 - 669
  • [25] Attention-based interactive multi-level feature fusion for named entity recognition
    Xu, Yiwu
    Chen, Yun
    SCIENTIFIC REPORTS, 2025, 15 (01)
  • [26] Attention-based Multi-Level Fusion Network for Light Field Depth Estimation
    Chen, Jiaxin
    Zhang, Shuo
    Lin, Youfang
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 1009 - 1017
  • [27] Multi-level Multi-task representation learning with adaptive fusion for multimodal sentiment analysis
    Zhu, Chuanbo
    Chen, Min
    Li, Haomin
    Zhang, Sheng
    Liang, Han
    Sun, Chao
    Liu, Yifan
    Chen, Jincai
    NEURAL COMPUTING AND APPLICATIONS, 2025, 37 (3) : 1491 - 1508
  • [28] Residual Attention-Based Image Fusion Method with Multi-Level Feature Encoding
    Li, Hao
    Yang, Tiantian
    Wang, Runxiang
    Li, Cuichun
    Zhou, Shuyu
    Guo, Xiqing
    SENSORS, 2025, 25 (03)
  • [29] AB-GRU: An attention-based bidirectional GRU model for multimodal sentiment fusion and analysis
    Wu, Jun
    Zheng, Xinli
    Wang, Jiangpeng
    Wu, Junwei
    Wang, Ji
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2023, 20 (10) : 18523 - 18544
  • [30] Multi-level attention fusion network assisted by relative entropy alignment for multimodal speech emotion recognition
    Lei, Jianjun
    Wang, Jing
    Wang, Ying
    APPLIED INTELLIGENCE, 2024, 54 (17-18) : 8478 - 8490