A short video sentiment analysis model based on multimodal feature fusion

Times Cited: 0
Authors
Shi, Hongyu [1 ]
Affiliations
[1] Guangxi Technol Coll Machinery & Elect, Sch Cultural Tourism & Management, Nanning 530000, Peoples R China
Source
Keywords
Emotional analysis; Feature fusion; Multi-head attention mechanism; Short videos; Text; Voice; EMOTION RECOGNITION; PREDICTION; LSTM;
DOI
10.1016/j.sasc.2024.200148
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
With the development of the internet, the number of short-video platform users has grown rapidly, and people's social entertainment has gradually shifted from text to short videos, generating large amounts of multimodal data. As a result, traditional single-modal sentiment analysis can no longer fully handle such data. To address this issue, this study proposes a short video sentiment analysis model based on multimodal feature fusion. The model analyzes the text, speech, and visual content of a video and fuses the information from the three modalities through a multi-head attention mechanism to analyze and classify emotions. The experimental results showed that when the training set size was 500, the proposed multimodal sentiment analysis model based on modal contribution recognition and multi-task learning achieved a recognition accuracy of 0.96, an F1 score of 98, and an average absolute error of 0.21. When the validation set size was 400, the model's recognition time was 2.1 s, and at 60 iterations it was 0.9 s. These results show that the proposed model performs well and can accurately identify emotions in short videos.
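The abstract describes the fusion step only at a high level, so below is a minimal PyTorch sketch of what multi-head attention fusion over text, speech, and visual features could look like. All class names, feature dimensions, the number of heads, and the mean-pooling step are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch only -- not the paper's implementation. It illustrates the
# idea described in the abstract: project text, speech, and visual features
# into a shared space, fuse them with multi-head attention, and classify.
# All dimensions, layer choices, and names below are assumptions.
import torch
import torch.nn as nn


class MultimodalFusionClassifier(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, visual_dim=512,
                 fused_dim=256, num_heads=4, num_classes=3):
        super().__init__()
        # Project each modality into a common feature space.
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.audio_proj = nn.Linear(audio_dim, fused_dim)
        self.visual_proj = nn.Linear(visual_dim, fused_dim)
        # Multi-head self-attention over the three modality "tokens".
        self.attn = nn.MultiheadAttention(fused_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, text_feat, audio_feat, visual_feat):
        # Stack projected modalities as a length-3 sequence: (batch, 3, fused_dim).
        tokens = torch.stack([
            self.text_proj(text_feat),
            self.audio_proj(audio_feat),
            self.visual_proj(visual_feat),
        ], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)  # attention across modalities
        pooled = fused.mean(dim=1)                    # average the attended tokens
        return self.classifier(pooled)                # sentiment logits


# Usage with random tensors standing in for pretrained encoder outputs.
model = MultimodalFusionClassifier()
logits = model(torch.randn(2, 768), torch.randn(2, 128), torch.randn(2, 512))
print(logits.shape)  # torch.Size([2, 3])
```

In practice each modality branch would be fed by its own pretrained encoder (for example a text transformer, an audio network, and a visual CNN), and the pooling could be replaced by a dedicated fusion token; those details are not specified in this record.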
Pages: 9
Related papers
50 entries in total
  • [1] Sentiment analysis based on text information enhancement and multimodal feature fusion
    Liu, Zijun
    Cai, Li
    Yang, Wenjie
    Liu, Junhui
    [J]. PATTERN RECOGNITION, 2024, 156
  • [2] Multimodal Sentiment Analysis Method Based on Hierarchical Adaptive Feature Fusion Network
    Zhang, Huchao
    [J]. INTERNATIONAL JOURNAL ON SEMANTIC WEB AND INFORMATION SYSTEMS, 2024, 20 (01)
  • [3] Sentiment Analysis of Social Media via Multimodal Feature Fusion
    Zhang, Kang
    Geng, Yushui
    Zhao, Jing
    Liu, Jianxin
    Li, Wenxiao
    [J]. SYMMETRY-BASEL, 2020, 12 (12): 1 - 14
  • [4] Quantum-inspired multimodal fusion for video sentiment analysis
    Li, Qiuchi
    Gkoumas, Dimitris
    Lioma, Christina
    Melucci, Massimo
    [J]. INFORMATION FUSION, 2021, 65 : 58 - 71
  • [5] AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model
    Ji, Mingyu
    Zhou, Jiawei
    Wei, Ning
    [J]. PLOS ONE, 2022, 17 (09):
  • [6] Deep Relationship Analysis in Video with Multimodal Feature Fusion
    Yu, Fan
    Wang, DanDan
    Zhang, Beibei
    Ren, Tongwei
    [J]. MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 4640 - 4644
  • [7] Multimodal sentiment analysis based on fusion methods: A survey
    Zhu, Linan
    Zhu, Zhechao
    Zhang, Chenwei
    Xu, Yifei
    Kong, Xiangjie
    [J]. INFORMATION FUSION, 2023, 95 : 306 - 325
  • [8] Multimodal Sentiment Analysis Based on Composite Hierarchical Fusion
    Lei, Yu
    Qu, Keshuai
    Zhao, Yifan
    Han, Qing
    Wang, Xuguang
    [J]. COMPUTER JOURNAL, 2024, 67 (06): 2230 - 2245
  • [9] News Short Video Classification Model Fusing Multimodal Feature
    Zeng, Xiangjiu
    Liu, Dawei
    Liu, Yifan
    Zhao, Zhibin
    Liu, Xiumei
    Ren, Yougui
    [J]. COMPUTER ENGINEERING AND APPLICATIONS, 2023, 59 (14): 107 - 113