The evolution of political memes: Detecting and characterizing internet memes with multi-modal deep learning

Cited: 57
Authors
Beskow, David M. [1 ]
Kumar, Sumeet [1 ]
Carley, Kathleen M. [1 ]
Affiliation
[1] Carnegie Mellon Univ, Sch Comp Sci, 5000 Forbes Ave, Pittsburgh, PA 15213 USA
Keywords
Deep learning; Multi-modal learning; Computer vision; Meme detection; Meme
DOI
10.1016/j.ipm.2019.102170
CLC classification
TP [Automation technology; computer technology]
Subject code
0812
Abstract
Combining humor with cultural relevance, Internet memes have become a ubiquitous artifact of the digital age. As Richard Dawkins described in his book The Selfish Gene, memes behave like cultural genes, propagating and evolving through a complex process of 'mutation' and 'inheritance'. On the Internet, these memes activate inherent biases in a culture or society, sometimes replacing logical approaches to persuasive argument. Despite their success on the Internet, their detection and evolution remain understudied. In this research, we propose and evaluate Meme-Hunter, a multi-modal deep learning model that classifies images on the Internet as memes versus non-memes, and we compare it to uni-modal approaches. We then use image similarity, meme-specific optical character recognition, and face detection to find and study families of memes shared on Twitter during the 2018 US mid-term elections. By mapping meme mutation in an electoral process, this study confirms Richard Dawkins' concept of meme evolution.
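The abstract describes a multi-modal classifier that combines visual features with text extracted from the image (e.g. via OCR). A minimal late-fusion sketch of that idea is shown below; this is purely illustrative, not the authors' Meme-Hunter architecture: the embedding sizes, the random toy features, and the linear sigmoid scorer are all assumptions standing in for real CNN and text encoders.

```python
import math
import random

random.seed(0)

def fuse_features(image_emb, text_emb):
    """Late fusion: concatenate per-modality feature vectors into one."""
    return image_emb + text_emb

def score_meme(fused, weights, bias=0.0):
    """Sigmoid over a linear layer: probability the image is a meme."""
    z = sum(f * w for f, w in zip(fused, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy stand-ins for a CNN image embedding and an OCR-text embedding
# (dimensions chosen arbitrarily for illustration).
image_emb = [random.gauss(0, 1) for _ in range(128)]
text_emb = [random.gauss(0, 1) for _ in range(32)]

fused = fuse_features(image_emb, text_emb)
weights = [random.gauss(0, 0.1) for _ in range(len(fused))]
prob = score_meme(fused, weights)
print(f"P(meme) = {prob:.3f}")
```

In practice the fused vector would feed a trained classification head; the point of fusing is that meme-ness often depends on the joint image-and-overlaid-text signal that neither modality captures alone.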
Pages: 13
Related papers
50 items in total
  • [41] A Massive Multi-Modal Perception Data Classification Method Using Deep Learning Based on Internet of Things
    Jiang, Linli
    Wu, Chunmei
    INTERNATIONAL JOURNAL OF WIRELESS INFORMATION NETWORKS, 2020, 27 : 226 - 233
  • [42] A Novel Cross Modal Hashing Algorithm Based on Multi-modal Deep Learning
    Qu, Wen
    Wang, Daling
    Feng, Shi
    Zhang, Yifei
    Yu, Ge
    SOCIAL MEDIA PROCESSING, SMP 2015, 2015, 568 : 156 - 167
  • [44] Multi-modal deep convolutional dictionary learning for image denoising
    Sun, Zhonggui
    Zhang, Mingzhu
    Sun, Huichao
    Li, Jie
    Liu, Tingting
    Gao, Xinbo
    NEUROCOMPUTING, 2023, 562
  • [45] Multi-Modal Deep Learning for the Thickness Prediction of Blood Clot
    Moon, Jiseon
    Ahn, Sangil
    Joo, Min Gyu
    Baac, Hyoung Won
    Shin, Jitae
    2023 25TH INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY, ICACT, 2023, : 341 - 344
  • [46] Effective deep learning-based multi-modal retrieval
    Wang, Wei
    Yang, Xiaoyan
    Ooi, Beng Chin
    Zhang, Dongxiang
    Zhuang, Yueting
    VLDB JOURNAL, 2016, 25 : 79 - 101
  • [47] Deep Learning Based Multi-modal Registration for Retinal Imaging
    Arikan, Mustafa
    Sadeghipour, Amir
    Gerendas, Bianca
    Told, Reinhard
    Schmidt-Erfurth, Ursula
    INTERPRETABILITY OF MACHINE INTELLIGENCE IN MEDICAL IMAGE COMPUTING AND MULTIMODAL LEARNING FOR CLINICAL DECISION SUPPORT, 2020, 11797 : 75 - 82
  • [48] A Deep Reinforcement Learning Recommendation Model with Multi-modal Features
    Pan H.
    Xie J.
    Gao J.
    Xu X.
    Wang C.
    Data Analysis and Knowledge Discovery, 2023, 7 (04) : 114 - 128
  • [49] InstaIndoor and multi-modal deep learning for indoor scene recognition
    Glavan, Andreea
    Talavera, Estefanía
    NEURAL COMPUTING AND APPLICATIONS, 2022, 34 : 6861 - 6877