Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis

Cited: 0
Authors
Akhtar, Md Shad [1 ]
Chauhan, Dushyant Singh [1 ]
Ghosal, Deepanway [1 ]
Poria, Soujanya [2 ]
Ekbal, Asif [1 ]
Bhattacharyya, Pushpak [1 ]
Affiliations
[1] Indian Inst Technol Patna, Dept Comp Sci & Engn, Patna, India
[2] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
Keywords
DOI
None available
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Related tasks are often inter-dependent and perform better when solved in a joint framework. In this paper, we present a deep multi-task learning framework that jointly performs both sentiment and emotion analysis. The multi-modal inputs (i.e., text, acoustic, and visual frames) of a video convey diverse and distinctive information, and usually do not contribute equally to the decision making. We propose a context-level inter-modal attention framework for simultaneously predicting the sentiment and the expressed emotions of an utterance. We evaluate our proposed approach on the CMU-MOSEI dataset for multi-modal sentiment and emotion analysis. Evaluation results suggest that the multi-task learning framework offers improvements over the single-task framework. The proposed approach achieves new state-of-the-art performance for both sentiment analysis and emotion analysis.
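The abstract's pipeline (context-level inter-modal attention over the utterances of a video, followed by task heads that share one fused representation for sentiment and emotion) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the feature dimensions, the modality-pair list, and the plain linear heads are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inter_modal_attention(a, b):
    """Context-level attention of modality `a` over modality `b`:
    each utterance in `a` attends to all utterances of `b` in the video.
    a, b: (num_utterances, dim) arrays."""
    scores = a @ b.T                 # (U, U) cross-modal affinities
    weights = softmax(scores, axis=-1)
    return weights @ b               # attended representation, (U, dim)

# Toy video: 5 utterances, 16-dim features per modality (illustrative sizes).
U, D = 5, 16
text, audio, video = (rng.normal(size=(U, D)) for _ in range(3))

# Attend over every ordered modality pair, then concatenate with raw features.
pairs = [(text, audio), (text, video), (audio, video),
         (audio, text), (video, text), (video, audio)]
attended = [inter_modal_attention(a, b) for a, b in pairs]
fused = np.concatenate([text, audio, video] + attended, axis=-1)  # (U, 9*D)

# Two task heads sharing the fused representation (multi-task learning):
W_sent = rng.normal(size=(fused.shape[1], 3))  # negative / neutral / positive
W_emo = rng.normal(size=(fused.shape[1], 6))   # six CMU-MOSEI emotion labels
sentiment_probs = softmax(fused @ W_sent)              # (U, 3), softmax
emotion_scores = 1 / (1 + np.exp(-(fused @ W_emo)))    # (U, 6), multi-label sigmoid

print(sentiment_probs.shape, emotion_scores.shape)
```

The emotion head uses per-label sigmoids because an utterance can express several emotions at once, whereas the sentiment head picks one class with a softmax.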
Pages: 370-379
Page count: 10
Related papers
50 records in total
  • [1] Sentiment and Emotion help Sarcasm? A Multi-task Learning Framework for Multi-Modal Sarcasm, Sentiment and Emotion Analysis
    Chauhan, Dushyant Singh
    Dhanush, S. R.
    Ekbal, Asif
    Bhattacharyya, Pushpak
    [J]. 58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 4351 - 4360
  • [2] Multi-modal embeddings using multi-task learning for emotion recognition
    Khare, Aparna
    Parthasarathy, Srinivas
    Sundaram, Shiva
    [J]. INTERSPEECH 2020, 2020, : 384 - 388
  • [3] Multi-modal Sentiment and Emotion Joint Analysis with a Deep Attentive Multi-task Learning Model
    Zhang, Yazhou
    Rong, Lu
    Li, Xiang
    Chen, Rui
    [J]. ADVANCES IN INFORMATION RETRIEVAL, PT I, 2022, 13185 : 518 - 532
  • [4] Multi-Task and Multi-Modal Learning for RGB Dynamic Gesture Recognition
    Fan, Dinghao
    Lu, Hengjie
    Xu, Shugong
    Cao, Shan
    [J]. IEEE SENSORS JOURNAL, 2021, 21 (23) : 27026 - 27036
  • [5] MULTI-MODAL MULTI-TASK DEEP LEARNING FOR SPEAKER AND EMOTION RECOGNITION OF TV-SERIES DATA
    Novitasari, Sashi
    Quoc Truong Do
    Sakti, Sakriani
    Lestari, Dessi
    Nakamura, Satoshi
    [J]. 2018 ORIENTAL COCOSDA - INTERNATIONAL CONFERENCE ON SPEECH DATABASE AND ASSESSMENTS, 2018, : 37 - 42
  • [6] Multi-task & Multi-modal Sentiment Analysis Model Based on Aware Fusion
    Wu, Sisi
    Ma, Jing
    [J]. Data Analysis and Knowledge Discovery, 2023, 7 (10): : 74 - 84
  • [7] M3GAT: A Multi-modal, Multi-task Interactive Graph Attention Network for Conversational Sentiment Analysis and Emotion Recognition
    Zhang, Yazhou
    Jia, Ao
    Wang, Bo
    Zhang, Peng
    Zhao, Dongming
    Li, Pu
    Hou, Yuexian
    Jin, Xiaojia
    Song, Dawei
    Qin, Jing
    [J]. ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2024, 42 (01)
  • [8] Multimodal Sentiment Recognition With Multi-Task Learning
    Zhang, Sun
    Yin, Chunyong
    Yin, Zhichao
    [J]. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2023, 7 (01): : 200 - 209
  • [9] Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
    Kumar, Abhishek
    Ekbal, Asif
    Kawahara, Daisuke
    Kurohashi, Sadao
    [J]. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [10] Multi-modal microblog classification via multi-task learning
    Zhao, Sicheng
    Yao, Hongxun
    Zhao, Sendong
    Jiang, Xuesong
    Jiang, Xiaolei
    [J]. Multimedia Tools and Applications, 2016, 75 : 8921 - 8938