A review of multimodal-based emotion recognition techniques for cyberbullying detection in online social media platforms

Cited by: 0
Authors
Wang, Shuai [1 ,2 ]
Shibghatullah, Abdul Samad [1 ]
Iqbal, Thirupattur Javid [1 ]
Keoy, Kay Hooi [1 ]
Affiliations
[1] Institute of Computer Science and Digital Innovation, UCSI University, Kuala Lumpur 56000, Malaysia
[2] Department of Physics and Electronic Engineering, Yuncheng University, Yuncheng 044000, China
Keywords
Adversarial machine learning; Contrastive learning; Deep learning; Federated learning; Physiological models; Speech recognition
DOI
10.1007/s00521-024-10371-3
Abstract
Cyberbullying is a serious issue on online social media platforms (OSMP) that requires effective detection and intervention systems. Multimodal emotion recognition (MER) technology can help prevent cyberbullying by analyzing emotions expressed through text, visual cues such as facial expressions, tone of voice, and physiological signals. However, existing machine learning-based MER models have limited accuracy and generalization. Deep learning (DL) methods have achieved remarkable success across a wide range of tasks and have been applied to learn high-level emotional features for MER. This paper provides a systematic review of recent research on DL-based MER for cyberbullying detection (MERCD). We first introduce the concept of cyberbullying and the general framework of MERCD, as well as the commonly used multimodal emotion datasets. We then review the principles and recent advances of representative DL techniques. Next, we focus on the research progress on two key steps in MERCD: emotion feature extraction from the speech, vision, and text modalities, and multimodal information fusion strategies. Finally, we discuss the challenges and opportunities in designing a cyberbullying prediction model and suggest possible directions for future research in the MERCD area. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
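As a minimal illustrative sketch of the feature-level fusion step mentioned in the abstract (not taken from the reviewed paper; the encoders, feature dimensions, and emotion classes below are assumptions), per-modality features can be projected into a shared space, concatenated, and classified:

# Illustrative sketch only: concatenation-based multimodal fusion in PyTorch.
# All dimensions and the number of emotion classes are assumed for the example.
import torch
import torch.nn as nn

class LateFusionMER(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, vision_dim=512,
                 hidden_dim=256, num_emotions=6):
        super().__init__()
        # Project pre-extracted features from each modality to a shared space.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.vision_proj = nn.Linear(vision_dim, hidden_dim)
        # Classify the concatenated multimodal representation into emotions.
        self.classifier = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_emotions),
        )

    def forward(self, text_feat, audio_feat, vision_feat):
        fused = torch.cat([
            torch.relu(self.text_proj(text_feat)),
            torch.relu(self.audio_proj(audio_feat)),
            torch.relu(self.vision_proj(vision_feat)),
        ], dim=-1)
        return self.classifier(fused)  # emotion logits per sample

# Example: a batch of 4 samples with pre-extracted modality features.
model = LateFusionMER()
logits = model(torch.randn(4, 768), torch.randn(4, 128), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 6])

More elaborate fusion strategies surveyed in this area (e.g., attention-based or tensor fusion) replace the simple concatenation step above.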
Pages: 21923–21956
Number of pages: 33