Multilogue-Net: A Context Aware RNN for Multi-modal Emotion Detection and Sentiment Analysis in Conversation

Cited by: 0
Authors
Shenoy, Aman [1 ]
Sardana, Ashish [2 ]
Affiliations
[1] Birla Inst Technol & Sci, Pilani, RA, India
[2] NVIDIA Graph, Bengaluru, KA, India
Keywords
DOI
None available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Sentiment analysis and emotion detection in conversation are key to several real-world applications, and an increase in the number of available modalities aids a better understanding of the underlying emotions. Multi-modal emotion detection and sentiment analysis can be particularly useful, as applications can use whichever subset of modalities the available data supports. Current multi-modal systems fail to leverage and capture the context of the conversation across all modalities, the dependency between the listener(s)' and speaker's emotional states, and the relevance of and relationships between the available modalities. In this paper, we propose an end-to-end RNN architecture that addresses all of these drawbacks. Our proposed model, at the time of writing, outperforms the state of the art on a benchmark dataset across a variety of accuracy and regression metrics.
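The abstract describes tracking conversational context per modality and then combining the modalities by relevance. A minimal sketch of that general idea (not the paper's actual equations) is to run one recurrent cell per modality over the utterance sequence and fuse the per-modality hidden states with an attention weighting at each step. All names, dimensions, and weights below are toy assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_H = 8, 16                  # toy feature / hidden sizes (assumption)
MODALITIES = ["text", "audio", "video"]

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def make_gru(d_in, d_h):
    # randomly initialized weights for one GRU cell: (Wz, Uz, Wr, Ur, Wh, Uh)
    shapes = [(d_in, d_h), (d_h, d_h)] * 3
    return [rng.normal(0.0, 0.1, s) for s in shapes]

def gru_step(x, h, p):
    # standard GRU cell update for one utterance's features
    Wz, Uz, Wr, Ur, Wh, Uh = p
    z = sigmoid(x @ Wz + h @ Uz)             # update gate
    r = sigmoid(x @ Wr + h @ Ur)             # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde

def fuse(states):
    # attention over modality states: score each against the mean state,
    # softmax the scores, and take the weighted sum
    H = np.stack(states)                     # (num_modalities, D_H)
    scores = H @ H.mean(axis=0)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ H, w                          # fused vector, attention weights

grus = {m: make_gru(D_IN, D_H) for m in MODALITIES}
hidden = {m: np.zeros(D_H) for m in MODALITIES}

T = 5                                        # utterances in a toy conversation
utterances = {m: rng.normal(size=(T, D_IN)) for m in MODALITIES}

fused_per_utterance = []
for t in range(T):
    for m in MODALITIES:                     # one context state per modality
        hidden[m] = gru_step(utterances[m][t], hidden[m], grus[m])
    fused, weights = fuse([hidden[m] for m in MODALITIES])
    fused_per_utterance.append(fused)        # input to a sentiment/emotion head

last_weights = weights
```

In a trained model the fused vector would feed a classification or regression head; here the point is only the structure: separate recurrent context per modality, with a learned (here random) weighting deciding how much each modality contributes per utterance.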
Pages: 19-28
Page count: 10