Emotion-Aware Multimodal Fusion for Meme Emotion Detection

Cited by: 0
Authors
Sharma, Shivam [1 ,2 ]
Ramaneswaran, S. [3 ]
Akhtar, Md. Shad [4 ]
Chakraborty, Tanmoy [1 ]
Affiliations
[1] IIT Delhi, New Delhi 110016, India
[2] Wipro AI Labs, Bengaluru 560100, Karnataka, India
[3] VIT, Vellore 632014, Tamil Nadu, India
[4] IIIT Delhi, New Delhi 110020, India
Keywords
Emotion analysis; information fusion; memes; multimodality; social media;
DOI
10.1109/TAFFC.2024.3378698
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The ever-evolving social media discourse has witnessed an overwhelming use of memes to express opinions or dissent. Besides being misused for spreading malcontent, they are mined by corporations and political parties to glean the public's opinion. Therefore, memes predominantly offer affect-enriched insights towards ascertaining the societal psyche. However, the current approaches are yet to model the affective dimensions expressed in memes effectively. They rely extensively on large multimodal datasets for pre-training and do not generalize well due to constrained visual-linguistic grounding. In this paper, we introduce MOOD (Meme emOtiOns Dataset), which embodies six basic emotions. We then present ALFRED (emotion-Aware muLtimodal Fusion foR Emotion Detection), a novel multimodal neural framework that (i) explicitly models emotion-enriched visual cues, and (ii) employs an efficient cross-modal fusion via a gating mechanism. Our investigation establishes ALFRED's superiority over existing baselines by 4.94% F1. Additionally, ALFRED competes strongly with previous best approaches on the challenging Memotion task. We then discuss ALFRED's domain-agnostic generalizability by demonstrating its dominance over other baselines on two recently released datasets, HarMeme and Dank Memes. Further, we analyze ALFRED's interpretability using attention maps. Finally, we highlight the inherent challenges posed by the complex interplay of disparate modality-specific cues toward meme analysis.
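The abstract mentions cross-modal fusion via a gating mechanism but gives no architectural details. The sketch below shows one common form of gated multimodal fusion (in the style of a gated multimodal unit); the dimensions, weight initialization, and the exact gating equation are illustrative assumptions, not ALFRED's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(v, t, Wv, Wt, Wg):
    """Gated fusion of a visual vector v and a text vector t.

    Each modality is projected into a shared space; a learned gate
    g in (0, 1) decides, per dimension, how much each modality
    contributes to the fused representation.
    """
    hv = np.tanh(Wv @ v)                       # projected visual features
    ht = np.tanh(Wt @ t)                       # projected textual features
    g = sigmoid(Wg @ np.concatenate([v, t]))   # per-dimension gate
    return g * hv + (1.0 - g) * ht             # fused representation

# Hypothetical feature sizes (e.g. CNN image and transformer text embeddings).
d_v, d_t, d_h = 512, 768, 256
Wv = rng.standard_normal((d_h, d_v)) * 0.02
Wt = rng.standard_normal((d_h, d_t)) * 0.02
Wg = rng.standard_normal((d_h, d_v + d_t)) * 0.02

v = rng.standard_normal(d_v)   # stand-in visual embedding
t = rng.standard_normal(d_t)   # stand-in textual embedding
fused = gated_fusion(v, t, Wv, Wt, Wg)
```

In practice the three weight matrices would be trained end-to-end with the rest of the network; the gate lets the model down-weight an uninformative modality (e.g. a meme whose text carries the emotion while the image is generic).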
Pages: 1800-1811
Number of pages: 12