Predicting social media users' indirect aggression through pre-trained models

Cited by: 0
Authors
Zhou, Zhenkun [1]
Yu, Mengli [2,3,4]
Peng, Xingyu [5]
He, Yuxin [1]
Affiliations
[1] Capital Univ Econ & Business, Sch Stat, Dept Data Sci, Beijing, Peoples R China
[2] Nankai Univ, Sch Journalism & Commun, Tianjin, Peoples R China
[3] Nankai Univ, Convergence Media Res Ctr, Tianjin, Peoples R China
[4] Nankai Univ, Publishing Res Inst, Tianjin, Peoples R China
[5] Beihang Univ, State Key Lab Software Dev Environm, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Indirect aggression; Social media; Psychological traits; Pre-trained model; BERT; ERNIE;
DOI
10.7717/peerj-cs.2292
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Indirect aggression has become a prevalent phenomenon that erodes the social media environment. Because it is expensive and objectively difficult to determine what constitutes indirect aggression, the traditional self-report questionnaire is hard to employ in today's cyberspace. In this study, we present a model for predicting indirect aggression online based on pre-trained models. Building on Weibo users' social media activities, we constructed basic, dynamic, and content features and classified indirect aggression into three subtypes: social exclusion, malicious humour, and guilt induction. We then built the prediction model by combining these features with large-scale pre-trained models. The empirical evidence shows that the ERNIE-based prediction model outperforms the other pre-trained models and predicts indirect aggression online much better than models without the extra pre-trained information. This study offers a practical model for predicting users' indirect aggression. Furthermore, this work contributes to a better understanding of indirect aggression behaviors and can support the organization and management of social media platforms.
Pages: 21
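The abstract describes a pipeline in which annotated Weibo posts fine-tune a large pre-trained Chinese encoder to distinguish the three indirect-aggression subtypes. The snippet below is a minimal, hypothetical sketch of that final fine-tuning step using the Hugging Face transformers library. The checkpoint name, label ordering, toy data, and hyperparameters are illustrative assumptions rather than the authors' released code, and the paper's basic and dynamic user features are omitted for brevity.

```python
# Hypothetical sketch: fine-tune a pre-trained encoder to classify the
# three indirect-aggression subtypes named in the abstract. Not the
# authors' code; checkpoint, data, and hyperparameters are assumptions.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Subtypes from the abstract; index order is an arbitrary assumption.
LABELS = ["social_exclusion", "malicious_humour", "guilt_induction"]
MODEL_NAME = "nghuyong/ernie-3.0-base-zh"  # assumed public ERNIE checkpoint; a BERT checkpoint works the same way

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))

class WeiboDataset(Dataset):
    """Wraps (post text, subtype index) pairs; the paper's basic and
    dynamic user features would be concatenated alongside these."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=128, return_tensors="pt")
        self.labels = torch.tensor(labels)
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

# Toy placeholder strings standing in for annotated Weibo posts.
train = WeiboDataset(["example post A", "example post B"], [0, 1])
loader = DataLoader(train, batch_size=2, shuffle=True)
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        optim.zero_grad()
        out = model(**batch)   # cross-entropy loss over the three subtypes
        out.loss.backward()
        optim.step()
```

At inference time, `model(**tokenizer(post, return_tensors="pt")).logits.argmax(-1)` would map a new post to one of the three subtypes; the paper's reported gains from ERNIE over BERT would come from swapping only the checkpoint name in this setup.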