Predicting social media users' indirect aggression through pre-trained models

Cited by: 0
Authors
Zhou, Zhenkun [1 ]
Yu, Mengli [2 ,3 ,4 ]
Peng, Xingyu [5 ]
He, Yuxin [1 ]
Affiliations
[1] Capital Univ Econ & Business, Sch Stat, Dept Data Sci, Beijing, Peoples R China
[2] Nankai Univ, Sch Journalism & Commun, Tianjin, Peoples R China
[3] Nankai Univ, Convergence Media Res Ctr, Tianjin, Peoples R China
[4] Nankai Univ, Publishing Res Inst, Tianjin, Peoples R China
[5] Beihang Univ, State Key Lab Software Dev Environm, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
Indirect aggression; Social media; Psychological traits; Pre-trained model; BERT; ERNIE;
DOI
10.7717/peerj-cs.2292
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Indirect aggression has become a prevalent phenomenon that erodes the social media environment. Because it is costly and difficult to determine objectively what constitutes indirect aggression, traditional self-report questionnaires are hard to apply in today's online settings. In this study, we present a model for predicting indirect aggression online based on pre-trained models. Building on Weibo users' social media activities, we constructed basic, dynamic, and content features and classified indirect aggression into three subtypes: social exclusion, malicious humour, and guilt induction. We then built the prediction model by combining these features with large-scale pre-trained models. The empirical evidence shows that the ERNIE-based prediction model outperforms the other pre-trained models and predicts indirect aggression online much better than models without extra pre-trained information. This study offers a practical model for predicting users' indirect aggression. Furthermore, this work contributes to a better understanding of indirect aggression behavior and can support the organization and management of social media platforms.
Pages: 21