Multilingual and Multimodal Abuse Detection

Cited by: 2
Authors
Sharon, Rini [1]
Shah, Heet [1]
Mukherjee, Debdoot [1]
Gupta, Vikram [1]
Affiliations
[1] ShareChat, New Delhi, India
Source
INTERSPEECH 2022
Keywords
abusive speech detection; multimodal abuse detection; multilingual abuse detection;
DOI
10.21437/Interspeech.2022-10629
Chinese Library Classification
O42 [Acoustics];
Discipline codes
070206; 082403;
Abstract
The presence of abusive content on social media platforms is undesirable as it severely impedes healthy and safe social media interactions. While automatic abuse detection has been widely explored in the textual domain, audio abuse detection remains largely unexplored. In this paper, we attempt abuse detection in conversational audio from a multimodal perspective in a multilingual social media setting. Our key hypothesis is that, along with modelling the audio itself, incorporating discriminative information from other modalities can benefit this task. Our proposed method, MADA, explicitly focuses on two modalities beyond the audio: the underlying emotions expressed in the abusive audio and the semantic information encapsulated in the corresponding text. MADA demonstrates gains over audio-only approaches on the ADIMA dataset. We test the proposed approach on 10 different languages and observe consistent gains in the range of 0.6%-5.2% by leveraging multiple modalities. We also perform extensive ablation experiments to study the contribution of each modality and observe the best results when leveraging all modalities together. Additionally, our experiments empirically confirm a strong correlation between underlying emotions and abusive behaviour. Code is available at https://github.com/ShareChatAI/MADA
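The abstract describes combining audio, emotion, and text signals for abuse classification. The following is a minimal, hypothetical sketch of that multimodal-fusion idea; the embedding dimensions, the concatenation-based late fusion, and the linear head are illustrative assumptions, not the paper's actual MADA architecture (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings for the three modalities named in the abstract.
# In the real system these would come from trained encoders.
audio_emb = rng.standard_normal(128)    # audio representation
emotion_emb = rng.standard_normal(32)   # emotion representation
text_emb = rng.standard_normal(64)      # text representation

# Late fusion: concatenate the modality embeddings into one vector.
fused = np.concatenate([audio_emb, emotion_emb, text_emb])  # shape (224,)

# A simple linear head mapping the fused vector to an "abusive" probability.
W = rng.standard_normal(fused.shape[0]) * 0.01
logit = float(fused @ W)
prob_abusive = 1.0 / (1.0 + np.exp(-logit))  # sigmoid, in (0, 1)

print(fused.shape, prob_abusive)
```

Concatenation is only one fusion choice; attention-based or gated fusion over the per-modality embeddings is a common alternative when one modality (here, audio) should dominate.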
Pages: 4631-4635
Page count: 5
Related papers
50 items total
  • [21] Literacy unbound: multiliterate, multilingual, multimodal
    Huang, Qiaoya
    Chen, Liping
    INTERNATIONAL JOURNAL OF BILINGUAL EDUCATION AND BILINGUALISM, 2022, 25 (05) : 1947 - 1951
  • [22] Multilingual and Multimodal Hate Speech Analysis in Twitter
    Liz De la Pena Sarracen, Gretel
    WSDM '21: PROCEEDINGS OF THE 14TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2021, : 1109 - 1110
  • [23] Towards the Development of the Multilingual Multimodal Virtual Agent
    Vira, Inese
    Teselskis, Janis
    Skadina, Inguna
    ADVANCES IN NATURAL LANGUAGE PROCESSING, 2014, 8686 : 470 - 477
  • [24] The Multilingual Eyes Multimodal Traveler's App
    Villalobos, Wilbert
    Kumar, Yulia
    Li, J. Jenny
    PROCEEDINGS OF NINTH INTERNATIONAL CONGRESS ON INFORMATION AND COMMUNICATION TECHNOLOGY, VOL 8, ICICT 2024, 2024, 1004 : 565 - 575
  • [25] Glossa: A multilingual, multimodal, configurable user interface
    Nygaard, Lars
    Priestley, Joel
    Noklestad, Anders
    Johannessen, Janne Bondi
    SIXTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, LREC 2008, 2008, : 617 - 621
  • [27] Composition 2.0: Toward a Multilingual and Multimodal Framework
    Fraiberg, Steven
    COLLEGE COMPOSITION AND COMMUNICATION, 2010, 62 (01) : 100 - 126
  • [28] Multilingual and multimodal composition at school: ScribJab in action
    Dagenais, Diane
    Toohey, Kelleen
    Fox, Alexa Bennett
    Singh, Angelpreet
    LANGUAGE AND EDUCATION, 2017, 31 (03) : 263 - 282
  • [29] CVCoders at SemEval-2024 Task 4: Unified Multimodal Modelling For Multilingual Propaganda Detection in Memes
    Bakhshande, Fatemezahra
    Naderi, Mahdieh
    Etemadi, Sauleh
    PROCEEDINGS OF THE 18TH INTERNATIONAL WORKSHOP ON SEMANTIC EVALUATION, SEMEVAL-2024, 2024, : 1912 - 1918
  • [30] Multilingual novelty detection
    Tsai, Flora S.
    Zhang, Yi
    Kwee, Agus T.
    Tang, Wenyin
    EXPERT SYSTEMS WITH APPLICATIONS, 2011, 38 (01) : 652 - 658