Multimodal Sentiment Analysis Based on Interactive Transformer and Soft Mapping

Cited by: 5
Authors
Li, Zuhe [1 ,2 ]
Guo, Qingbing [1 ]
Feng, Chengyao [3 ]
Deng, Lujuan [1 ]
Zhang, Qiuwen [1 ]
Zhang, Jianwei [2 ]
Wang, Fengqin [1 ]
Sun, Qian [1 ]
Affiliations
[1] Zhengzhou Univ Light Ind, Sch Comp & Commun Engn, Zhengzhou 450002, Peoples R China
[2] Zhengzhou Univ Light Ind, Henan Key Lab Food Safety Data Intelligence, Zhengzhou 450002, Peoples R China
[3] Brandeis High Sch, San Antonio, TX 78249 USA
Source
WIRELESS COMMUNICATIONS & MOBILE COMPUTING | 2022, Vol. 2022
Funding
National Natural Science Foundation of China;
Keywords
FUSION;
DOI
10.1155/2022/6243347
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Multimodal sentiment analysis aims to harvest people's opinions or attitudes from multimedia data through fusion techniques. However, existing fusion methods fail to exploit the correlations among modalities and instead introduce interfering factors. In this paper, we propose a multimodal sentiment analysis method based on an Interactive Transformer and Soft Mapping. In the Interactive Transformer layer, an Interactive Multihead Guided-Attention structure composed of a pair of Multihead Attention modules first captures the mapping relationships between modalities; the resulting representations are then fed into a Feedforward Neural Network. Finally, the Soft Mapping layer, consisting of stacked Soft Attention modules, maps the results to a higher-dimensional space to fuse the multimodal information. The proposed model fully considers the relationships among the information carried by the different modalities and offers a new solution to the data-interaction problem in multimodal sentiment analysis. Evaluated on the benchmark datasets CMU-MOSEI and MELD, the model improves accuracy by 5.57% over the baseline.
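The abstract describes the architecture only at a high level, so below is a minimal PyTorch sketch of the two named components: an Interactive Transformer layer in which a pair of Multihead Attention modules performs guided attention in both directions between two modalities before a feedforward network, and a Soft Mapping layer that applies soft attention and projects the fused features to a higher dimension. All module names, dimensions, head counts, and the soft-attention formulation here are illustrative assumptions; the paper's exact design is not given in this record.

```python
# A hedged sketch of the Interactive Transformer + Soft Mapping pipeline.
# Hyperparameters and the soft-attention form are assumptions for illustration.
import torch
import torch.nn as nn


class InteractiveTransformerLayer(nn.Module):
    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        # A pair of Multihead Attention modules: modality A is guided by B
        # (queries from A, keys/values from B), and vice versa.
        self.attn_a2b = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_b2a = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Feedforward network applied to the concatenated guided features.
        self.ffn = nn.Sequential(
            nn.Linear(2 * d_model, 2 * d_model), nn.ReLU(),
            nn.Linear(2 * d_model, d_model),
        )

    def forward(self, mod_a: torch.Tensor, mod_b: torch.Tensor) -> torch.Tensor:
        a_guided, _ = self.attn_a2b(mod_a, mod_b, mod_b)  # A attends to B
        b_guided, _ = self.attn_b2a(mod_b, mod_a, mod_a)  # B attends to A
        # Pool over time and concatenate the two guided representations.
        fused = torch.cat([a_guided.mean(dim=1), b_guided.mean(dim=1)], dim=-1)
        return self.ffn(fused)


class SoftMappingLayer(nn.Module):
    """One soft-attention block: reweight features with a learned softmax
    distribution, then map them to a higher-dimensional space. The paper
    stacks several such modules; one is shown here for brevity."""

    def __init__(self, d_in: int = 128, d_out: int = 256):
        super().__init__()
        self.score = nn.Linear(d_in, d_in)
        self.project = nn.Linear(d_in, d_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.score(x), dim=-1)  # soft attention weights
        return self.project(weights * x)                # higher-dim mapping


# Example: fuse a text sequence and an audio sequence of equal feature width.
text = torch.randn(8, 20, 128)   # (batch, seq_len, d_model)
audio = torch.randn(8, 50, 128)
layer = InteractiveTransformerLayer()
fused = SoftMappingLayer()(layer(text, audio))
print(fused.shape)  # torch.Size([8, 256])
```

The bidirectional cross-attention (queries drawn from one modality, keys and values from the other) is what lets each modality be "guided" by its counterpart, matching the abstract's description of the Interactive Multihead Guided-Attention structure.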
Pages: 12