BLR: A Multi-modal Sentiment Analysis Model

Cited by: 0
|
Authors
Yang Yang [1 ,2 ,3 ,4 ]
Ye Zhonglin [1 ,2 ,3 ,4 ]
Zhao Haixing [1 ,2 ,3 ,4 ]
Li Gege [1 ,2 ,3 ,4 ]
Cao Shujuan [1 ,2 ,3 ,4 ]
Affiliations
[1] Qinghai Normal Univ, Coll Comp, Xining 810008, Peoples R China
[2] State Key Lab Tibetan Intelligent Informat Proc &, Xining 810008, Peoples R China
[3] Tibetan Informat Proc & Machine Translat Key Lab, Xining 810008, Peoples R China
[4] Minist Educ, Key Lab Tibetan Informat Proc, Xining 810008, Peoples R China
Keywords
Transformer; Deep Learning; Multi-modal; Feature Fusion;
DOI
10.1007/978-3-031-44204-9_39
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In multi-modal sentiment analysis tasks, deep learning plays an important role due to its excellent performance. Compared with traditional statistical and machine learning approaches, deep learning methods offer better performance and stability. However, two problems remain in multi-modal sentiment analysis: first, feature fusion can cause important information to be lost; second, the contribution of each feature after fusion is not precisely defined or calculated. To address these issues, we propose BLR, a multi-channel dual-fusion model built on the BERT, LSTM, and ResNeSt frameworks. Our model first ensures, to the greatest extent possible, that important features are not lost during fusion, and then processes and optimizes the fused representation according to each feature's contribution. Finally, we conduct experiments on two datasets; the results show that our model achieves accuracies of 76.125% and 77.5%, improvements of 3.025% and 2.875% over the best baseline, respectively. Thus, the proposed BLR model is more effective on multi-modal sentiment analysis tasks.
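The abstract does not give implementation details. As an illustrative sketch only (all function names and the scoring scheme here are hypothetical, not taken from the paper), a fusion step that both weights per-modality features by their contribution and preserves the raw features, so no modality's information is entirely lost, could look like:

```python
import math

def softmax(scores):
    # Turn per-modality contribution scores into fusion weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(features, scores):
    """Contribution-weighted fusion of per-modality feature vectors.

    features: equal-length vectors, e.g. text (BERT), sequence (LSTM),
              and image (ResNeSt) embeddings; scores: one contribution
              score per modality. Returns the weighted sum concatenated
    with the raw features, so the fused vector keeps all original info.
    """
    weights = softmax(scores)
    dim = len(features[0])
    weighted = [sum(w * f[i] for w, f in zip(weights, features))
                for i in range(dim)]
    # Concatenate: [weighted fusion | raw modality features].
    fused = weighted + [x for f in features for x in f]
    return fused, weights
```

With equal scores the weights reduce to a plain average, and the output dimension is (1 + number of modalities) times the per-modality dimension.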
Pages: 466-478
Page count: 13
Related Papers
50 items in total
  • [21] Non-Uniform Attention Network for Multi-modal Sentiment Analysis
    Wang, Binqiang
    Dong, Gang
    Zhao, Yaqian
    Li, Rengang
    Cao, Qichun
    Chao, Yinyin
    [J]. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2022, 13141 LNCS : 612 - 623
  • [23] Effective Sentiment-relevant Word Selection for Multi-modal Sentiment Analysis in Spoken Language
    Zhang, Dong
    Li, Shoushan
    Zhu, Qiaoming
    Zhou, Guodong
    [J]. PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 148 - 156
  • [24] A Multi-modal Graphical Model for Scene Analysis
    Namin, Sarah Taghavi
    Najafi, Mohammad
    Salzmann, Mathieu
    Petersson, Lars
    [J]. 2015 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2015, : 1006 - 1013
  • [25] CLAP: Contrastive Language-Audio Pre-training Model for Multi-modal Sentiment Analysis
    Zhao, Tianqi
    Kong, Ming
    Liang, Tian
    Zhu, Qiang
    Kuang, Kun
    Wu, Fei
    [J]. PROCEEDINGS OF THE 2023 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2023, 2023, : 622 - 626
  • [26] Multi-Modal Sarcasm Detection with Sentiment Word Embedding
    Fu, Hao
    Liu, Hao
    Wang, Hongling
    Xu, Linyan
    Lin, Jiali
    Jiang, Dazhi
    [J]. ELECTRONICS, 2024, 13 (05)
  • [27] Improve the application of reinforcement learning and multi-modal information in music sentiment analysis
    Yang, Qi
    Liu, Songhu
    Gong, Tianzhuo
    [J]. EXPERT SYSTEMS, 2023,
  • [28] MOC: Multi-modal Sentiment Analysis via Optimal Transport and Contrastive Interactions
    Li, Yi
    Zhu, Qingmeng
    He, Hao
    Gu, Ziyin
    Zheng, Changwen
    [J]. NEURAL INFORMATION PROCESSING, ICONIP 2023, PT II, 2024, 14448 : 439 - 451
  • [29] Supervised Contrastive Learning for Robust and Efficient Multi-modal Emotion and Sentiment Analysis
    Gomaa, Ahmed
    Maier, Andreas
    Kosti, Ronak
    [J]. 2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2423 - 2429
  • [30] Context-aware Interactive Attention for Multi-modal Sentiment and Emotion Analysis
    Chauhan, Dushyant Singh
    Akhtar, Md Shad
    Ekbal, Asif
    Bhattacharyya, Pushpak
    [J]. 2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 5647 - 5657