TEDT: Transformer-Based Encoding–Decoding Translation Network for Multimodal Sentiment Analysis

Cited by: 0
Authors
Fan Wang
Shengwei Tian
Long Yu
Jing Liu
Junwen Wang
Kun Li
Yongtao Wang
Affiliations
[1] University of Xinjiang, School of Software
[2] University of Xinjiang, Network and Information Center
Source
Cognitive Computation | 2023, Vol. 15
Keywords
Multimodal sentiment analysis; Transformer; Multimodal fusion; Multimodal attention
DOI: not available
Abstract
Multimodal sentiment analysis is a popular and challenging research topic in natural language processing, but the individual modalities in a video can affect the sentiment analysis result differently. In the temporal dimension, natural-language sentiment is influenced by non-natural-language sentiment, which may enhance or weaken the sentiment of the current natural language. In addition, non-natural-language features are generally of poor quality, which fundamentally hinders multimodal fusion. To address these issues, we propose a transformer-based multimodal encoding–decoding translation network that adopts a joint encoding–decoding scheme with text as the primary modality and sound and image as secondary modalities. To reduce the negative impact of non-natural-language data on natural-language data, we propose a modality reinforcement cross-attention module that converts non-natural-language features into natural-language features, improving their quality and enabling better integration of multimodal features. Moreover, a dynamic filtering mechanism filters out erroneous information generated during cross-modal interaction to further improve the final output. We evaluated the proposed method on two multimodal sentiment analysis benchmark datasets (MOSI and MOSEI), achieving accuracies of 89.3% and 85.9%, respectively, outperforming current state-of-the-art methods. Our model can greatly improve multimodal fusion and analyze human sentiment more accurately.
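The core idea behind the modality reinforcement cross-attention module — re-expressing audio or visual features in the text sequence's frame — can be sketched as plain scaled dot-product cross-attention, with text features as queries and non-language features as keys and values. The function name, dimensions, and lack of learned projections below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cross_attention(text_feats, other_feats):
    """Scaled dot-product cross-attention: text tokens (queries) attend
    over non-language frames (keys/values), yielding non-language
    information aligned to the text sequence length."""
    q, k, v = text_feats, other_feats, other_feats
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                    # (T_text, T_other)
    # row-wise softmax (numerically stabilized)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                               # (T_text, d)

rng = np.random.default_rng(0)
text = rng.standard_normal((5, 16))    # 5 text tokens, 16-dim features
audio = rng.standard_normal((8, 16))   # 8 audio frames, 16-dim features
out = cross_attention(text, audio)
print(out.shape)  # (5, 16): audio re-expressed at the text's length
```

In the paper's full module, learned query/key/value projections and the dynamic filtering mechanism would sit around this core attention step; the sketch only shows why the output inherits the text sequence's temporal dimension.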
Pages: 289–303 (14 pages)
Related papers
50 in total
  • [1] TEDT: Transformer-Based Encoding-Decoding Translation Network for Multimodal Sentiment Analysis
    Wang, Fan
    Tian, Shengwei
    Yu, Long
    Liu, Jing
    Wang, Junwen
    Li, Kun
    Wang, Yongtao
    COGNITIVE COMPUTATION, 2023, 15 (01) : 289 - 303
  • [2] MEDT: Using Multimodal Encoding-Decoding Network as in Transformer for Multimodal Sentiment Analysis
    Qi, Qingfu
    Lin, Liyuan
    Zhang, Rui
    Xue, Chengrong
    IEEE ACCESS, 2022, 10 : 28750 - 28759
  • [3] Transformer-based Feature Reconstruction Network for Robust Multimodal Sentiment Analysis
    Yuan, Ziqi
    Li, Wei
    Xu, Hua
    Yu, Wenmeng
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 4400 - 4407
  • [4] Transformer-based adaptive contrastive learning for multimodal sentiment analysis
    Hu, Y.
    Huang, X.
    Wang, X.
    Lin, H.
    Zhang, R.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2025, 84 (3) : 1385 - 1402
  • [5] TMBL: Transformer-based multimodal binding learning model for multimodal sentiment analysis
    Huang, Jiehui
    Zhou, Jun
    Tang, Zhenchao
    Lin, Jiaying
    Chen, Calvin Yu-Chian
    KNOWLEDGE-BASED SYSTEMS, 2024, 285
  • [6] Transformer-Based Graph Convolutional Network for Sentiment Analysis
    AlBadani, Barakat
    Shi, Ronghua
    Dong, Jian
    Al-Sabri, Raeed
    Moctard, Oloulade Babatounde
    APPLIED SCIENCES-BASEL, 2022, 12 (03)
  • [7] A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis
    Delbrouck, Jean-Benoit
    Tits, Noe
    Brousmiche, Mathilde
    Dupont, Stephane
    PROCEEDINGS OF THE SECOND GRAND CHALLENGE AND WORKSHOP ON MULTIMODAL LANGUAGE (CHALLENGE-HML), VOL 1, 2020, : 1 - 7
  • [8] Transformer-Based Unified Neural Network for Quality Estimation and Transformer-Based Re-decoding Model for Machine Translation
    Chen, Cong
    Zong, Qinqin
    Luo, Qi
    Qiu, Bailian
    Li, Maoxi
    MACHINE TRANSLATION, CCMT 2020, 2020, 1328 : 66 - 75
  • [9] Transformer-based correlation mining network with self-supervised label generation for multimodal sentiment analysis
    Wang, Ruiqing
    Yang, Qimeng
    Tian, Shengwei
    Yu, Long
    He, Xiaoyu
    Wang, Bo
    NEUROCOMPUTING, 2025, 618
  • [10] Transformer-Based Physiological Feature Learning for Multimodal Analysis of Self-Reported Sentiment
    Katada, Shun
    Okada, Shogo
    Komatani, Kazunori
    PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, ICMI 2022, 2022, : 349 - 358