Adaptation of Transformer-Based Models for Depression Detection

Cited by: 0
Authors
Adebanji, Olaronke O. [1 ]
Ojo, Olumide E. [1 ]
Calvo, Hiram [1 ]
Gelbukh, Irina [1 ]
Sidorov, Grigori [1 ]
Affiliations
[1] Inst Politecn Nacl, Ctr Invest Comp, Mexico City, Mexico
Source
COMPUTACION Y SISTEMAS | 2024, Vol. 28, No. 01
Keywords
Depression; bag-of-words; word2vec; GloVe; machine learning; deep learning; transformers; sentiment analysis;
DOI
10.13053/CyS-28-1-4691
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Pre-trained language models capture a broad range of knowledge and language patterns in text and can be fine-tuned for specific tasks. In this paper, we evaluate the effectiveness of traditional machine learning models and pre-trained language models in identifying depression from social media text. We examined different feature representations with the traditional machine learning models, explored the impact of pre-training on the transformer models, and compared their performance. Using BoW, Word2Vec, and GloVe representations, the machine learning models we experimented with achieved impressive accuracy in detecting depression. However, the pre-trained language models exhibited outstanding performance, consistently achieving accuracy, precision, recall, and F1 scores of approximately 0.98 or higher.
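The abstract contrasts classical feature-based pipelines (BoW, Word2Vec, GloVe) with fine-tuned transformers. As a rough illustration only, the sketch below shows one common way such a baseline comparison is set up; the toy texts, the choice of classifier, and all hyperparameters are assumptions for demonstration, not the authors' actual configuration or data.

```python
# Minimal sketch (not the authors' code): a bag-of-words + logistic regression
# baseline for binary depression detection, reporting precision/recall/F1.
# Word2Vec or GloVe variants would replace the CountVectorizer features with
# averaged word embeddings; the transformer models in the paper replace this
# whole pipeline with a fine-tuned sequence classifier (e.g., via HuggingFace
# `transformers`).

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

# Placeholder data; the paper uses social-media posts labelled for depression.
texts = [
    "i feel empty and tired all the time",
    "had a great day with friends",
    "nothing matters anymore",
    "excited about the new project",
]
labels = [1, 0, 1, 0]  # 1 = depressive, 0 = non-depressive (illustrative)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=0
)

# Bag-of-words features + a simple linear classifier.
vectorizer = CountVectorizer()
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

pred = clf.predict(vectorizer.transform(X_test))
precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, pred, average="binary", zero_division=0
)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```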
Pages: 151-165
Number of pages: 15
Related Papers
50 records in total
  • [41] Ouroboros: On Accelerating Training of Transformer-Based Language Models
    Yang, Qian
    Huo, Zhouyuan
    Wang, Wenlin
    Huang, Heng
    Carin, Lawrence
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [42] Semantics of Multiword Expressions in Transformer-Based Models: A Survey
    Miletic, Filip
    Schulte im Walde, Sabine
    [J]. TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2024, 12 : 593 - 612
  • [43] RadBERT: Adapting Transformer-based Language Models to Radiology
    Yan, An
    McAuley, Julian
    Lu, Xing
    Du, Jiang
    Chang, Eric Y.
    Gentili, Amilcare
    Hsu, Chun-Nan
    [J]. RADIOLOGY-ARTIFICIAL INTELLIGENCE, 2022, 4 (04)
  • [44] Blockwise compression of transformer-based models without retraining
    Dong, Gaochen
    Chen, W.
    [J]. NEURAL NETWORKS, 2024, 171 : 423 - 428
  • [45] Transformer-Based Federated Learning Models for Recommendation Systems
    Reddy, M. Sujaykumar
    Karnati, Hemanth
    Sundari, L. Mohana
    [J]. IEEE ACCESS, 2024, 12 : 109596 - 109607
  • [46] A Comparison of Transformer-Based Language Models on NLP Benchmarks
    Greco, Candida Maria
    Tagarelli, Andrea
    Zumpano, Ester
    [J]. NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS (NLDB 2022), 2022, 13286 : 490 - 501
  • [47] Strawberry disease identification with vision transformer-based models
    Nguyen, Hai Thanh
    Tran, Tri Dac
    Nguyen, Thanh Tuong
    Pham, Nhi Minh
    Nguyen Ly, Phuc Hoang
    Luong, Huong Hoang
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (29) : 73101 - 73126
  • [48] Are transformer-based models more robust than CNN-based models?
    Liu, Zhendong
    Qian, Shuwei
    Xia, Changhong
    Wang, Chongjun
    [J]. NEURAL NETWORKS, 2024, 172
  • [49] Applications of transformer-based language models in bioinformatics: a survey
    Zhang, Shuang
    Fan, Rui
    Liu, Yuti
    Chen, Shuang
    Liu, Qiao
    Zeng, Wanwen
    [J]. NEURO-ONCOLOGY ADVANCES, 2023, 5 (01)
  • [50] TAG: Gradient Attack on Transformer-based Language Models
    Deng, Jieren
    Wang, Yijue
    Li, Ji
    Wang, Chenghong
    Shang, Chao
    Liu, Hang
    Rajasekaran, Sanguthevar
    Ding, Caiwen
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 3600 - 3610