Calibration of Transformer-Based Models for Identifying Stress and Depression in Social Media

Cited by: 12
Authors
Ilias, Loukas [1]
Mouzakitis, Spiros [1]
Askounis, Dimitris [1]
Affiliation
[1] Natl Tech Univ Athens, Decis Support Syst Lab, School of Elect & Comp Engn, Athens 15780, Greece
Keywords
Calibration; depression; emotion; mental health; stress; transformers
DOI
10.1109/TCSS.2023.3283009
CLC Classification
TP3 [Computing Technology; Computer Technology]
Subject Classification
0812
Abstract
In today's fast-paced world, rates of stress and depression are surging. People use social media to express their thoughts and feelings through posts, so social media can support the early detection of mental health conditions. Existing methods mainly introduce feature extraction approaches and train shallow machine learning (ML) classifiers. To avoid hand-crafting large feature sets and to obtain better performance, other studies use deep neural networks or transformer-based language models. Although transformer-based models achieve noticeable improvements, they often cannot capture rich factual knowledge. While a number of studies have proposed enhancing pretrained transformer-based models with extra information or additional modalities, no prior work has exploited these modifications for detecting stress and depression through social media. In addition, although the reliability of an ML model's confidence in its predictions is critical for high-risk applications, no prior work has taken model calibration into consideration. To resolve these issues, we present the first study on depression and stress detection in social media that injects extra-linguistic information into transformer-based models, namely bidirectional encoder representations from transformers (BERT) and MentalBERT. Specifically, the proposed approach employs a multimodal adaptation gate to create the combined embeddings, which are given as input to a BERT (or MentalBERT) model. To account for model calibration, we apply label smoothing. We test our proposed approaches on three publicly available datasets and demonstrate that integrating linguistic features into transformer-based models yields a substantial improvement in performance. Label smoothing also improves both the model's performance and its calibration. Finally, we perform a linguistic analysis of the posts and show differences in language between stressful and nonstressful texts, as well as between depressive and nondepressive posts.
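The abstract names two concrete mechanisms: a multimodal adaptation gate (MAG) that fuses extra-linguistic feature vectors into the token embeddings fed to BERT or MentalBERT, and label smoothing to improve calibration. The PyTorch sketch below illustrates both under stated assumptions; the class name MultimodalAdaptationGate, the dimensions text_dim/feat_dim, the scaling factor beta, and the ECE helper are illustrative choices, not the authors' released implementation (only nn.CrossEntropyLoss(label_smoothing=...) is standard PyTorch API).

```python
# Minimal sketch (not the authors' code) of a multimodal adaptation gate
# injecting extra-linguistic features into token embeddings, plus label
# smoothing and a simple calibration metric. Dimensions are assumptions.
import torch
import torch.nn as nn

class MultimodalAdaptationGate(nn.Module):
    """Gate an auxiliary feature vector into each token embedding."""

    def __init__(self, text_dim: int = 768, feat_dim: int = 64, beta: float = 1.0):
        super().__init__()
        self.gate = nn.Linear(text_dim + feat_dim, text_dim)  # gating weights
        self.shift = nn.Linear(feat_dim, text_dim)            # feature projection
        self.norm = nn.LayerNorm(text_dim)
        self.beta = beta

    def forward(self, text_emb: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # text_emb: (batch, seq_len, text_dim); feats: (batch, feat_dim)
        feats = feats.unsqueeze(1).expand(-1, text_emb.size(1), -1)
        g = torch.sigmoid(self.gate(torch.cat([text_emb, feats], dim=-1)))
        h = g * self.shift(feats)  # gated extra-linguistic shift
        # Cap the shift's magnitude relative to the text embedding so the
        # auxiliary signal cannot dominate the pretrained representation.
        alpha = torch.clamp(
            self.beta * text_emb.norm(dim=-1, keepdim=True)
            / (h.norm(dim=-1, keepdim=True) + 1e-6),
            max=1.0,
        )
        return self.norm(text_emb + alpha * h)

# Label smoothing is built into PyTorch's cross-entropy loss.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

def expected_calibration_error(probs: torch.Tensor, labels: torch.Tensor,
                               n_bins: int = 10) -> float:
    """ECE: bin-weighted gap between accuracy and mean confidence."""
    conf, pred = probs.max(dim=-1)
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            acc = (pred[mask] == labels[mask]).float().mean()
            ece += mask.float().mean() * (acc - conf[mask].mean()).abs()
    return ece.item()
```

One design point worth noting: the alpha clamp keeps the gated shift small relative to the text embedding, which is what lets a pretrained encoder absorb the injected signal without destabilizing its representations.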
Pages: 1979-1990
Page count: 12
Related Papers (50 records in total)
  • [1] Depression detection in social media posts using transformer-based models and auxiliary features
    Kerasiotis, Marios
    Ilias, Loukas
    Askounis, Dimitris
    [J]. SOCIAL NETWORK ANALYSIS AND MINING, 2024, 14 (01)
  • [2] Identifying suicidal emotions on social media through transformer-based deep learning
    Kodati, Dheeraj
    Tene, Ramakrishnudu
    [J]. APPLIED INTELLIGENCE, 2023, 53 (10) : 11885 - 11917
  • [3] Adaptation of Transformer-Based Models for Depression Detection
    Adebanji, Olaronke O.
    Ojo, Olumide E.
    Calvo, Hiram
    Gelbukh, Irina
    Sidorov, Grigori
    [J]. COMPUTACION Y SISTEMAS, 2024, 28 (01) : 151 - 165
  • [4] Transformer-based deep learning models for the sentiment analysis of social media data
    Kokab, Sayyida Tabinda
    Asghar, Sohail
    Naz, Shehneela
    [J]. ARRAY, 2022, 14
  • [5] Transformer-Based Extractive Social Media Question Answering on TweetQA
    Butt, Sabur
    Ashraf, Noman
    Fahim, Hammad
    Sidorov, Grigori
    Gelbukh, Alexander
    [J]. COMPUTACION Y SISTEMAS, 2021, 25 (01) : 23 - 32
  • [6] Transformer-based deep learning models for predicting permeability of porous media
    Meng, Yinquan
    Jiang, Jianguo
    Wu, Jichun
    Wang, Dong
    [J]. ADVANCES IN WATER RESOURCES, 2023, 179
  • [7] Attention Calibration for Transformer-based Sequential Recommendation
    Zhou, Peilin
    Ye, Qichen
    Xie, Yueqi
    Gao, Jingqi
    Wang, Shoujin
    Kim, Jae Boum
    You, Chenyu
    Kim, Sunghun
    [J]. PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 3595 - 3605
  • [8] EEG Classification with Transformer-Based Models
    Sun, Jiayao
    Xie, Jin
    Zhou, Huihui
    [J]. 2021 IEEE 3RD GLOBAL CONFERENCE ON LIFE SCIENCES AND TECHNOLOGIES (IEEE LIFETECH 2021), 2021, : 92 - 93
  • [9] Integrating Social Media Insights for Innovation Performance Enhancement: A Transformer-Based Analysis
    Wang, Ang
    Niu, Yue
    [J]. JOURNAL OF THE KNOWLEDGE ECONOMY, 2024