Calibration of Transformer-Based Models for Identifying Stress and Depression in Social Media

Cited by: 12
Authors
Ilias, Loukas [1]
Mouzakitis, Spiros [1]
Askounis, Dimitris [1]
Affiliations
[1] Natl Tech Univ Athens, Decis Support Syst Lab, School of Elect & Comp Engn, Athens 15780, Greece
Keywords
Calibration; depression; emotion; mental health; stress; transformers
DOI
10.1109/TCSS.2023.3283009
CLC Classification Number
TP3 [Computing Technology; Computer Technology]
Subject Classification Number
0812
Abstract
In today's fast-paced world, rates of stress and depression are surging. People use social media to express their thoughts and feelings through posts, so social media can assist in the early detection of mental health conditions. Existing methods mainly introduce feature extraction approaches and train shallow machine learning (ML) classifiers. To avoid the need for crafting large feature sets and to obtain better performance, other studies use deep neural networks or transformer-based language models. Although transformer-based models achieve noticeable improvements, they often cannot capture rich factual knowledge. While a number of studies have proposed enhancing pretrained transformer-based models with extra information or additional modalities, no prior work has exploited these modifications for detecting stress and depression through social media. In addition, although the reliability of an ML model's confidence in its predictions is critical for high-risk applications, no prior work has taken model calibration into consideration. To resolve these issues, we present the first study on depression and stress detection in social media that injects extra-linguistic information into transformer-based models, namely bidirectional encoder representations from transformers (BERT) and MentalBERT. Specifically, the proposed approach employs a multimodal adaptation gate to create combined embeddings, which are given as input to a BERT (or MentalBERT) model. To account for model calibration, we apply label smoothing. We test our proposed approaches on three publicly available datasets and demonstrate that integrating linguistic features into transformer-based models yields a notable improvement in performance. Moreover, label smoothing contributes both to improving the model's performance and to calibrating the model. Finally, we perform a linguistic analysis of the posts and show differences in language between stressful and nonstressful texts, as well as between depressive and nondepressive posts.
Pages: 1979-1990
Number of pages: 12