Multi-modal Detection of Cyberbullying on Twitter

Cited: 4
Authors
Qiu, Jiabao [1 ]
Moh, Melody [1 ]
Moh, Teng-Sheng [1 ]
Affiliations
[1] San Jose State Univ, San Jose, CA 95192 USA
Keywords
Machine Learning; Neural Networks; Natural Language Processing; Sentiment Analysis
DOI
10.1145/3476883.3520222
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Cyberbullying detection has become a trending research topic in recent years, driven by the popularity of social media and the lack of restrictions on electronic communication. Detecting cyberbullying may help prevent some bullying behavior online. This paper introduces a multi-modal system that combines a Convolutional Neural Network (CNN), a Tensor Fusion Network, a VGG-19 network, and a Multi-Layer Perceptron (MLP) for cyberbullying detection. The system analyzes not only the text of a message but also its associated meta-information and any images it contains. Trained and tested on Twitter datasets, the proposed system achieves an accuracy of 93%, which is 4% higher than the benchmark text-only model on the same dataset and 6.6% higher than previous work. Based on these results, we believe the proposed system performs well and will provide new ideas for future work.
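
The paper itself provides no code, but the abstract outlines the architecture clearly enough for a rough sketch. Below is a minimal, hypothetical PyTorch sketch of how such a pipeline (a text CNN branch, VGG-19 image features, a meta-information branch, outer-product tensor fusion, and an MLP classifier head) might be wired together; all layer sizes, the meta-feature dimension, and the two-class output are assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code) of a multi-modal cyberbullying
# classifier: text CNN + VGG-19 image features + meta features, fused via
# an outer-product tensor fusion and classified by an MLP head.
import torch
import torch.nn as nn
import torchvision.models as models

class TextCNN(nn.Module):
    """1-D convolutional encoder over word embeddings (assumed vocabulary/embedding sizes)."""
    def __init__(self, vocab_size=20000, embed_dim=100, num_filters=64, out_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size=3, padding=1)
        self.proj = nn.Linear(num_filters, out_dim)

    def forward(self, token_ids):                        # (batch, seq_len) int64
        x = self.embed(token_ids).transpose(1, 2)        # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values   # global max pooling over time
        return torch.relu(self.proj(x))                  # (batch, out_dim)

class MultiModalClassifier(nn.Module):
    """Fuses text, image, and meta vectors with an outer product (tensor fusion),
    then classifies with a small MLP."""
    def __init__(self, text_dim=32, img_dim=32, meta_in=8, meta_dim=8):
        super().__init__()
        self.text_enc = TextCNN(out_dim=text_dim)
        self.img_backbone = models.vgg19(weights=None).features   # pretrained weights optional
        self.img_proj = nn.Sequential(
            nn.AdaptiveAvgPool2d((7, 7)), nn.Flatten(),
            nn.Linear(512 * 7 * 7, img_dim), nn.ReLU())
        self.meta_enc = nn.Sequential(nn.Linear(meta_in, meta_dim), nn.ReLU())
        fused_dim = (text_dim + 1) * (img_dim + 1) * (meta_dim + 1)
        self.classifier = nn.Sequential(                  # MLP head
            nn.Linear(fused_dim, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, 2))                            # bullying / not bullying

    def forward(self, token_ids, images, meta):
        t = self.text_enc(token_ids)                      # (batch, text_dim)
        v = self.img_proj(self.img_backbone(images))      # (batch, img_dim)
        m = self.meta_enc(meta)                           # (batch, meta_dim)
        ones = torch.ones(t.size(0), 1, device=t.device)  # append 1 so unimodal terms survive fusion
        t = torch.cat([t, ones], dim=1)
        v = torch.cat([v, ones], dim=1)
        m = torch.cat([m, ones], dim=1)
        fused = torch.einsum('bi,bj,bk->bijk', t, v, m).flatten(1)  # tensor fusion
        return self.classifier(fused)                     # (batch, 2) logits

In use, the text branch would receive tokenized tweet text, the image branch 224x224 RGB images attached to tweets, and the meta branch a small vector of normalized tweet metadata (e.g., follower count or retweet count); these particular meta features are illustrative guesses, since the abstract only says "meta-information".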
Pages: 9 - 16 (8 pages)
Related Papers (50 in total)
  • [31] Multi-Modal Face Presentation Attack Detection
    Institute of Automation, Chinese Academy of Sciences, Guodong, China
     Synth. Lect. Comput. Vis., 2020, 1 (1-88): 1 - 88
  • [32] Multi-modal detection of holmium oxide nanoparticles
    Taleb, Jacqueline
    Brice, Mutelet
    Alice, Herland
    Celine, Mandon
    Olivier, Tillement
    Cedric, Louis
    Stephane, Roux
    Marc, Janier
    Pascal, Parriat
    Claire, Billotey
    BULLETIN DU CANCER, 2009, 96 : S19 - S20
  • [33] A survey on multi-modal social event detection
    Zhou, Han
    Yin, Hongpeng
    Zheng, Hengyi
    Li, Yanxia
    KNOWLEDGE-BASED SYSTEMS, 2020, 195
  • [34] A Multitask Framework for Sentiment, Emotion and Sarcasm aware Cyberbullying Detection from Multi-modal Code-Mixed Memes
    Maity, Krishanu
    Jha, Prince
    Saha, Sriparna
    Bhattacharyya, Pushpak
    PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22), 2022, : 1739 - 1749
  • [35] Multi-modal anchor adaptation learning for multi-modal summarization
    Chen, Zhongfeng
    Lu, Zhenyu
    Rong, Huan
    Zhao, Chuanjun
    Xu, Fan
    NEUROCOMPUTING, 2024, 570
  • [36] Towards Sentiment and Emotion aided Multi-modal Speech Act Classification in Twitter
    Saha, Tulika
    Upadhyaya, Apoorva
    Saha, Sriparna
    Bhattacharyya, Pushpak
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 5727 - 5737
  • [37] Multi-Modal Sarcasm Detection with Interactive In-Modal and Cross-Modal Graphs
    Liang, Bin
    Lou, Chenwei
    Li, Xiang
    Gui, Lin
    Yang, Min
    Xu, Ruifeng
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 4707 - 4715
  • [38] Flexible Dual Multi-Modal Hashing for Incomplete Multi-Modal Retrieval
    Wei, Yuhong
    An, Junfeng
    INTERNATIONAL JOURNAL OF IMAGE AND GRAPHICS, 2024
  • [39] Deep Multi-modal Object Detection for Autonomous Driving
    Ennajar, Amal
    Khouja, Nadia
    Boutteau, Remi
    Tlili, Fethi
    2021 18TH INTERNATIONAL MULTI-CONFERENCE ON SYSTEMS, SIGNALS & DEVICES (SSD), 2021, : 7 - 11
  • [40] Multi-modal Misinformation Detection: Approaches, Challenges and Opportunities
    Abdali, Sara
    Shaham, Sina
    Krishnamachari, Bhaskar
    ACM COMPUTING SURVEYS, 2024, 57 (03)