Detection of suicidality from medical text using privacy-preserving large language models

Cited: 0
Authors
Wiest, Isabella Catharina [1,2]
Verhees, Falk Gerrik [3]
Ferber, Dyke [1,4,5]
Zhu, Jiefu [1]
Bauer, Michael [3]
Lewitzka, Ute [3]
Pfennig, Andrea [3]
Mikolas, Pavol [3]
Kather, Jakob Nikolas [1,4,5,6]
Affiliations
[1] Tech Univ Dresden, Else Kroener Fresenius Ctr Digital Hlth, Dresden, Germany
[2] Heidelberg Univ, Med Fac Mannheim, Dept Med 2, Mannheim, Germany
[3] Tech Univ Dresden, Carl Gustav Carus Univ Hosp, Dept Psychiat & Psychotherapy, Dresden, Germany
[4] Heidelberg Univ Hosp, Natl Ctr Tumor Dis NCT, Heidelberg, Germany
[5] Heidelberg Univ Hosp, Dept Med Oncol, Heidelberg, Germany
[6] Univ Hosp Dresden, Dept Med 1, Dresden, Germany
Funding
European Research Council;
Keywords
Large language models; natural language processing; suicidality; psychiatric disorder detection; electronic health records;
DOI
10.1192/bjp.2024.134
CLC classification
R749 [Psychiatry];
Subject classification code
100205;
Abstract
Background
Attempts to use artificial intelligence (AI) in psychiatric disorders show moderate success, highlighting the potential of incorporating information from clinical assessments to improve the models. This study focuses on using large language models (LLMs) to detect suicide risk from medical text in psychiatric care.

Aims
To extract information about suicidality status from the admission notes in electronic health records (EHRs) using privacy-sensitive, locally hosted LLMs, specifically evaluating the efficacy of Llama-2 models.

Method
We compared the performance of several variants of the open-source LLM Llama-2 in extracting suicidality status from 100 psychiatric reports against a ground truth defined by human experts, assessing accuracy, sensitivity, specificity and F1 score across different prompting strategies.

Results
A German fine-tuned Llama-2 model showed the highest accuracy (87.5%), sensitivity (83.0%) and specificity (91.8%) in identifying suicidality, with significant improvements in sensitivity and specificity across various prompt designs.

Conclusions
The study demonstrates the capability of LLMs, particularly Llama-2, to accurately extract information on suicidality from psychiatric records while preserving data privacy. This suggests their application in surveillance systems for psychiatric emergencies and in improving the clinical management of suicidality through systematic quality control and research.
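The Method paragraph above outlines the workflow: prompt a locally hosted Llama-2 variant to extract a binary suicidality label from each admission note, then score those labels against the expert-defined ground truth using accuracy, sensitivity, specificity and F1. The sketch below is a minimal illustration of that kind of pipeline, not the authors' implementation: it assumes the model is served through a local, OpenAI-compatible chat endpoint (for example via llama.cpp or Ollama), and the endpoint URL, model tag and prompt wording are invented placeholders.

"""
Minimal sketch: label admission notes for suicidality with a locally hosted
Llama-2 model and score the labels against expert ground truth.
The endpoint URL, model tag and prompt wording are illustrative assumptions.
"""
import json
import requests  # assumes a local, OpenAI-compatible chat server (e.g. llama.cpp or Ollama)

LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"  # hypothetical local server
MODEL_NAME = "llama-2-13b-chat"                               # hypothetical local model tag

PROMPT_TEMPLATE = (
    "You are extracting structured information from a psychiatric admission note.\n"
    "Does the note document current suicidality (suicidal ideation, plans or attempt)?\n"
    "Answer with exactly one word: YES or NO.\n\nNote:\n{note}"
)

def classify_note(note: str) -> bool:
    """Ask the local model for a YES/NO label; no text leaves the local machine."""
    payload = {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": PROMPT_TEMPLATE.format(note=note)}],
        "temperature": 0.0,  # deterministic extraction rather than free generation
    }
    response = requests.post(LOCAL_ENDPOINT, json=payload, timeout=120)
    answer = response.json()["choices"][0]["message"]["content"].strip().upper()
    return answer.startswith("YES")

def evaluate(predictions: list[bool], ground_truth: list[bool]) -> dict:
    """Accuracy, sensitivity, specificity and F1 against the expert labels."""
    tp = sum(p and g for p, g in zip(predictions, ground_truth))
    tn = sum((not p) and (not g) for p, g in zip(predictions, ground_truth))
    fp = sum(p and (not g) for p, g in zip(predictions, ground_truth))
    fn = sum((not p) and g for p, g in zip(predictions, ground_truth))
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if (precision + sensitivity) else 0.0)
    return {
        "accuracy": (tp + tn) / len(ground_truth),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "f1": f1,
    }

if __name__ == "__main__":
    # Toy stand-ins for de-identified admission notes and expert labels.
    notes = ["Patient reports persistent suicidal ideation with a concrete plan.",
             "No suicidal ideation reported; admitted for sleep disturbance."]
    expert_labels = [True, False]
    preds = [classify_note(n) for n in notes]
    print(json.dumps(evaluate(preds, expert_labels), indent=2))

Keeping the temperature at zero and forcing a one-word YES/NO answer makes the extraction deterministic and trivial to parse; comparing different prompting strategies, as the study does, would amount to swapping out PROMPT_TEMPLATE and re-running the evaluation.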
Pages: 532-537
Number of pages: 6