MemoSen: A Multimodal Dataset for Sentiment Analysis of Memes

Cited: 0
Authors
Hossain, Eftekhar [1 ]
Sharif, Omar [2 ]
Hoque, Mohammed Moshiul [2 ]
Affiliations
[1] Chittagong Univ Engn & Technol, Dept Elect & Telecommun Engn, Chattogram 4349, Bangladesh
[2] Chittagong Univ Engn & Technol, Dept Comp Sci & Engn, Chattogram 4349, Bangladesh
Keywords
Sentiment analysis; Multimodal fusion; Memes; Code-mixing; Low resource languages;
DOI
None available
Chinese Library Classification
TP39 [Computer Applications];
Discipline Classification Codes
081203; 0835;
Abstract
Posting and sharing memes has become a powerful means of expressing opinions on social media in recent years. Sentiment analysis of memes has attracted considerable attention from researchers due to its substantial implications in domains such as finance and politics. Past studies on meme sentiment analysis have primarily been conducted in English, while low-resource languages have received little or no attention. However, with the proliferation of social media usage in recent years, sentiment analysis of memes has become a crucial research issue for low-resource languages. The scarcity of benchmark datasets is a significant barrier to multimodal sentiment analysis research in resource-constrained languages such as Bengali. This paper presents a novel multimodal dataset (named MemoSen) for Bengali containing 4368 memes annotated with three sentiment labels: positive, negative, and neutral. A detailed annotation guideline is provided to facilitate further resource development in this domain. Additionally, a set of experiments is carried out on MemoSen by constructing twelve unimodal (i.e., visual, textual) and ten multimodal (image+text) models. The evaluation shows that integrating multimodal information significantly improves meme sentiment classification (by about 1.2%) over the unimodal counterparts, thus elucidating the novel aspects of multimodality.
Pages: 1542-1554
Page count: 13
Related Papers
50 total
  • [21] Multimodal Phased Transformer for Sentiment Analysis
    Cheng, Junyan
    Fostiropoulos, Iordanis
    Boehm, Barry
    Soleymani, Mohammad
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 2447 - 2458
  • [22] Internet memes as multimodal constructions
    Dancygier, Barbara
    Vandelanotte, Lieven
    COGNITIVE LINGUISTICS, 2017, 28 (03) : 565 - 598
  • [23] A Optimized BERT for Multimodal Sentiment Analysis
    Wu, Jun
    Zhu, Tianliang
    Zhu, Jiahui
    Li, Tianyi
    Wang, Chunzhi
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19 (02)
  • [24] Sentiment analysis of multimodal twitter data
    Kumar, Akshi
    Garg, Geetanjali
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (17) : 24103 - 24119
  • [25] DEMUSA: DEMO FOR MULTIMODAL SENTIMENT ANALYSIS
    Hong, Soyeon
    Kim, Jeonghoon
    Lee, Donghoon
    Cho, Hyunsouk
    2022 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO WORKSHOPS (IEEE ICMEW 2022), 2022,
  • [26] A Multimodal Approach to Image Sentiment Analysis
    Gaspar, Antonio
    Alexandre, Luis A.
    INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING - IDEAL 2019, PT I, 2019, 11871 : 302 - 309
  • [28] Trustworthy Multimodal Fusion for Sentiment Analysis in Ordinal Sentiment Space
    Xie, Z.
    Yang, Y.
    Wang, J.
    Liu, X.
    Li, X.
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (08)
  • [29] The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes
    Kiela, Douwe
    Firooz, Hamed
    Mohan, Aravind
    Goswami, Vedanuj
    Singh, Amanpreet
    Ringshia, Pratik
    Testuggine, Davide
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [30] FMSA-SC: A Fine-Grained Multimodal Sentiment Analysis Dataset Based on Stock Comment Videos
    Song, Lingyun
    Chen, Siyu
    Meng, Ziyang
    Sun, Mingxuan
    Shang, Xuequn
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 7294 - 7306