SKEAFN: Sentiment Knowledge Enhanced Attention Fusion Network for multimodal sentiment analysis

Cited: 15
Authors
Zhu, Chuanbo [1 ]
Chen, Min [2 ,3 ]
Zhang, Sheng [1 ]
Sun, Chao [1 ]
Liang, Han [1 ]
Liu, Yifan [1 ]
Chen, Jincai [1 ,2 ,4 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Wuhan Natl Lab Optoelect, Wuhan 430074, Hubei, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Hubei, Peoples R China
[3] Huazhong Univ Sci & Technol, Embedded & Pervas Comp EP Lab, Wuhan 430074, Hubei, Peoples R China
[4] Minist Educ China, Engn Res Ctr Data Storage Syst & Technol, Key Lab Informat Storage Syst, Wuhan 430074, Hubei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-view learning; Multiple feature fusion; Multimodal sentiment analysis; External knowledge; Multi-head attention;
DOI
10.1016/j.inffus.2023.101958
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multimodal sentiment analysis is an active research field that aims to recognize users' sentiment from multimodal data. The primary challenge in this field is to develop a high-quality fusion framework that effectively addresses the heterogeneity among different modalities. However, prior research has primarily concentrated on intermodal interactions while neglecting the semantic sentiment information conveyed by words in the text modality. In this paper, we propose the Sentiment Knowledge Enhanced Attention Fusion Network (SKEAFN), a novel end-to-end fusion network that enhances multimodal fusion by incorporating additional sentiment knowledge representations from an external knowledge base. First, we construct an external knowledge enhancement module to acquire additional representations for the text modality. Then, we design a text-guided interaction module that facilitates the interaction between the text and the visual/acoustic modalities. Finally, we propose a feature-wise attention fusion module that achieves multimodal fusion by dynamically adjusting the weights of the additional representations and each modality's representations. We evaluate our method on three challenging multimodal sentiment analysis datasets: CMU-MOSI, CMU-MOSEI, and Twitter2019. The experimental results demonstrate that our model significantly outperforms state-of-the-art models. The source code is publicly available at https://github.com/doubibobo/SKEAFN.
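The feature-wise fusion idea described in the abstract can be illustrated with a minimal NumPy sketch (this is a hypothetical simplification, not the authors' implementation; the gating parameters `W`, `b` and all dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_wise_attention_fusion(reps, W, b):
    """Fuse modality/knowledge representations with per-feature gates.

    reps: list of m vectors of shape (d,), e.g. text, visual, acoustic,
          and external-knowledge representations.
    W, b: parameters of a linear gating layer (here random, untrained).
    """
    concat = np.concatenate(reps)                      # (m*d,)
    gates = 1.0 / (1.0 + np.exp(-(W @ concat + b)))    # sigmoid gates in (0, 1)
    gates = gates.reshape(len(reps), -1)               # (m, d): one gate per feature
    stacked = np.stack(reps)                           # (m, d)
    return (gates * stacked).sum(axis=0)               # (d,) fused representation

d, m = 8, 4  # feature dim; text, visual, acoustic, knowledge
reps = [rng.standard_normal(d) for _ in range(m)]
W = rng.standard_normal((m * d, m * d)) * 0.1
b = np.zeros(m * d)
fused = feature_wise_attention_fusion(reps, W, b)
print(fused.shape)  # (8,)
```

In a trained network the gating layer would be learned end-to-end, so that uninformative features from any single representation can be down-weighted dynamically per sample.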
Pages: 11
Related Papers
50 records
  • [41] Attention-Based Fusion of Intra- and Intermodal Dynamics in Multimodal Sentiment Analysis
    Yaghoubi, Ehsan
    Tran, Tuyet Kim
    Borza, Diana
    Frintrop, Simone
    2024 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS AND OTHER AFFILIATED EVENTS, PERCOM WORKSHOPS, 2024, : 273 - 278
  • [42] Entity-Sensitive Attention and Fusion Network for Entity-Level Multimodal Sentiment Classification
    Yu, Jianfei
    Jiang, Jing
    Xia, Rui
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2020, 28 : 429 - 439
  • [43] Multimodal Sentiment Analysis Using BiGRU and Attention-Based Hybrid Fusion Strategy
    Liu, Zhizhong
    Zhou, Bin
    Meng, Lingqiang
    Huang, Guangyu
    INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2023, 37 (02): : 1963 - 1981
  • [44] Aspect-level multimodal sentiment analysis based on co-attention fusion
    Wang, Shunjie
    Cai, Guoyong
    Lv, Guangrui
    INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS, 2024,
  • [45] Self-adaptive attention fusion for multimodal aspect-based sentiment analysis
    Wang, Ziyue
    Guo, Junjun
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2024, 21 (01) : 1305 - 1320
  • [46] Multimodal Sentiment Analysis Representations Learning via Contrastive Learning with Condense Attention Fusion
    Wang, Huiru
    Li, Xiuhong
    Ren, Zenyu
    Wang, Min
    Ma, Chunming
    SENSORS, 2023, 23 (05)
  • [47] Scanning, attention, and reasoning multimodal content for sentiment analysis
    Liu, Yun
    Li, Zhoujun
    Zhou, Ke
    Zhang, Leilei
    Li, Lang
    Tian, Peng
    Shen, Shixun
    KNOWLEDGE-BASED SYSTEMS, 2023, 268
  • [48] Multimodal Sentiment Analysis Method Based on Hierarchical Adaptive Feature Fusion Network
    Zhang, Huchao
    INTERNATIONAL JOURNAL ON SEMANTIC WEB AND INFORMATION SYSTEMS, 2024, 20 (01)
  • [49] A graph convolution-based heterogeneous fusion network for multimodal sentiment analysis
    Zhao, Tong
    Peng, Junjie
    Huang, Yansong
    Wang, Lan
    Zhang, Huiran
    Cai, Zesu
    APPLIED INTELLIGENCE, 2023, 53 (24) : 30469 - 30481
  • [50] Sarcasm driven by sentiment: A sentiment-aware hierarchical fusion network for multimodal sarcasm detection
    Liu, Hao
    Wei, Runguo
    Tu, Geng
    Lin, Jiali
    Liu, Cheng
    Jiang, Dazhi
    INFORMATION FUSION, 2024, 108