A short video sentiment analysis model based on multimodal feature fusion

Cited by: 0
Authors
Shi, Hongyu [1 ]
Affiliations
[1] Guangxi Technol Coll Machinery & Elect, Sch Cultural Tourism & Management, Nanning 530000, Peoples R China
Keywords
Emotional analysis; Feature fusion; Multi-head attention mechanism; Short videos; Text; Voice; EMOTION RECOGNITION; PREDICTION; LSTM;
DOI
10.1016/j.sasc.2024.200148
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the development of the internet, the number of short video platform users has grown rapidly. Social entertainment has gradually shifted from text to short video, generating large amounts of multimodal data, which traditional single-modal sentiment analysis can no longer fully handle. To address this issue, this study proposes a short video sentiment analysis model based on multimodal feature fusion. The model analyzes the text, speech, and visual content of a video, and integrates the information from the three modalities through a multi-head attention mechanism to analyze and classify emotions. In the experiments, with a training set of 500 samples, the proposed multimodal sentiment analysis model based on modal contribution recognition and multi-task learning achieved a recognition accuracy of 0.96, an F1 score of 98, and a mean absolute error of 0.21. With a validation set of 400 samples, its recognition time was 2.1 s, and at 60 iterations its recognition time was 0.9 s. These results show that the proposed model performs well and can accurately identify emotions in short videos.
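The fusion step described in the abstract, where three modality representations are integrated through a multi-head attention mechanism, can be sketched as follows. This is a minimal illustration only: it assumes each modality (text, speech, visual) has already been encoded into an embedding of equal dimension, and it uses randomly initialized projection weights in place of learned parameters. It is not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_fusion(text, audio, visual, n_heads=4, seed=0):
    """Fuse three modality embeddings (each shape (d,)) with
    multi-head self-attention over the 3-token modality sequence,
    then mean-pool the attended tokens into one fused vector."""
    rng = np.random.default_rng(seed)
    X = np.stack([text, audio, visual])              # (3, d): one token per modality
    d = X.shape[1]
    assert d % n_heads == 0, "embedding dim must divide evenly across heads"
    dh = d // n_heads
    # Random projections stand in for learned Q/K/V weight matrices.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        attn = softmax(Q[:, s] @ K[:, s].T / np.sqrt(dh))  # (3, 3) cross-modal weights
        heads.append(attn @ V[:, s])                       # (3, dh) attended slice
    # Concatenate heads, then pool over the 3 modality tokens.
    fused = np.concatenate(heads, axis=1).mean(axis=0)     # (d,)
    return fused

d = 8
fused = multi_head_fusion(np.ones(d), np.zeros(d), np.full(d, 0.5))
print(fused.shape)  # (8,)
```

In the paper's setting, the fused vector would then feed a classification head that outputs the sentiment label; here the pooled vector simply demonstrates how attention lets each modality weight the other two.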
Pages: 9