Attention-based Multi-modal Sentiment Analysis and Emotion Detection in Conversation using RNN

Cited: 37
Authors
Huddar, Mahesh G. [1 ,3 ]
Sannakki, Sanjeev S. [2 ,3 ]
Rajpurohit, Vijay S. [2 ,3 ]
Affiliations
[1] Hirasugar Inst Technol, Dept Comp Sci & Engn, Belagavi, India
[2] Gogte Inst Technol, Dept Comp Sci & Engn, Belagavi, India
[3] Visvesvaraya Technol Univ, Belagavi, India
Keywords
Attention Model; Interlocutor State; Contextual Information; Emotion Detection; Multimodal Fusion; Sentiment Analysis; FEATURE-EXTRACTION; RECOGNITION; FUSION
DOI
10.9781/ijimai.2020.07.004
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
With the availability of an enormous quantity of multimodal data and its widespread applications, automatic sentiment analysis and emotion classification in conversation have become an interesting research topic in the research community. The interlocutor state, the contextual state between neighboring utterances, and multimodal fusion play an important role in multimodal sentiment analysis and emotion detection in conversation. In this article, a recurrent neural network (RNN) based method is developed to capture the interlocutor state and the contextual state between utterances. A pair-wise attention mechanism is used to understand the relationship between the modalities and their importance before fusion. First, two modalities are fused at a time; finally, all the modalities are fused to form the trimodal representation feature vector. Experiments are conducted on three standard datasets: IEMOCAP, CMU-MOSEI, and CMU-MOSI. The proposed model is evaluated using two metrics, accuracy and F1-score, and the results demonstrate that it performs better than the standard baselines.
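The fusion scheme summarized above (pair-wise attention over two modalities at a time, then concatenation of all pairs into a trimodal representation) can be sketched roughly as follows. This is a minimal NumPy illustration of generic dot-product cross-modal attention, not the paper's exact formulation; the function names, feature dimensions, and scoring function are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pairwise_attention(a, b):
    """Generic cross-modal attention between two modality feature matrices.

    a, b: (seq_len, dim) utterance-level features for two modalities.
    Returns an attended bimodal representation of shape (seq_len, 2 * dim).
    """
    scores = a @ b.T                      # (seq_len, seq_len) cross-modal scores
    attn_ab = softmax(scores, axis=1)     # how each a-utterance attends over b
    attn_ba = softmax(scores.T, axis=1)   # how each b-utterance attends over a
    # Each modality attends to the other; concatenate the two attended views.
    return np.concatenate([attn_ab @ b, attn_ba @ a], axis=1)

# Toy utterance features for text (t), audio (a), video (v).
rng = np.random.default_rng(0)
seq_len, dim = 4, 8
t, a, v = (rng.standard_normal((seq_len, dim)) for _ in range(3))

# Fuse two modalities at a time, then concatenate all bimodal pairs
# into a single trimodal representation, as the abstract describes.
ta = pairwise_attention(t, a)
tv = pairwise_attention(t, v)
av = pairwise_attention(a, v)
trimodal = np.concatenate([ta, tv, av], axis=1)
print(trimodal.shape)  # prints (4, 48)
```

In the paper itself the attended features feed an RNN that tracks the interlocutor and contextual states; here the sketch stops at the fused trimodal vector per utterance.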
Pages: 112-121 (10 pages)