Zero-shot interpretable phenotyping of postpartum hemorrhage using large language models

Cited by: 10
Authors
Alsentzer, Emily [1]
Rasmussen, Matthew J. [2]
Fontoura, Romy [2]
Cull, Alexis L. [2]
Beaulieu-Jones, Brett [3]
Gray, Kathryn J. [4,5]
Bates, David W. [1,6]
Kovacheva, Vesela P. [2]
Affiliations
[1] Brigham & Womens Hosp, Div Gen Internal Med & Primary Care, Boston, MA USA
[2] Brigham & Womens Hosp, Dept Anesthesiol Perioperat & Pain Med, Boston, MA 02115 USA
[3] Univ Chicago, Dept Med, Sect Biomed Data Sci, Chicago, IL USA
[4] Massachusetts Gen Hosp, Ctr Genom Med, Boston, MA USA
[5] Brigham & Womens Hosp, Div Maternal Fetal Med, Boston, MA USA
[6] Harvard TH Chan Sch Publ Hlth, Dept Hlth Care Policy & Management, Boston, MA USA
Keywords
CLASSIFICATION; ALGORITHMS
DOI
10.1038/s41746-023-00957-x
Chinese Library Classification
R19 [Health organization and services (health services administration)]
Abstract
Many areas of medicine would benefit from deeper, more accurate phenotyping, but there are limited approaches for phenotyping using clinical notes without substantial annotated data. Large language models (LLMs) have demonstrated immense potential to adapt to novel tasks with no additional training by specifying task-specific instructions. Here we report the performance of a publicly available LLM, Flan-T5, in phenotyping patients with postpartum hemorrhage (PPH) using discharge notes from electronic health records (n = 271,081). The language model achieves strong performance in extracting 24 granular concepts associated with PPH. Identifying these granular concepts accurately allows the development of interpretable, complex phenotypes and subtypes. The Flan-T5 model achieves high fidelity in phenotyping PPH (positive predictive value of 0.95), identifying 47% more patients with this complication compared to the current standard of using claims codes. This LLM pipeline can be used reliably for subtyping PPH and outperforms a claims-based approach on the three most common PPH subtypes associated with uterine atony, abnormal placentation, and obstetric trauma. The advantage of this approach to subtyping is its interpretability, as each concept contributing to the subtype determination can be evaluated. Moreover, as definitions may change over time due to new guidelines, using granular concepts to create complex phenotypes enables prompt and efficient updating of the algorithm. Using this language modelling approach enables rapid phenotyping without the need for any manually annotated training data across multiple clinical use cases.
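The zero-shot pipeline the abstract describes (task-specific instructions per granular concept, binary answers combined into interpretable subtypes) can be sketched as below. This is an illustrative reconstruction, not the authors' code: the concept list, `build_prompt`, `parse_answer`, and `has_subtype` are hypothetical names, and the actual generation step would call an instruction-tuned model such as Flan-T5 (e.g. via a text-to-text generation API), which is omitted here.

```python
# Illustrative zero-shot concept extraction for PPH phenotyping.
# One instruction-style prompt is issued per granular concept; the model's
# free-text answer is mapped to a binary label, and labels are combined
# into an interpretable subtype rule. All names here are assumptions.

PPH_CONCEPTS = [
    "uterine atony",
    "abnormal placentation",
    "obstetric trauma",
    # ... the paper extracts 24 such granular concepts
]

def build_prompt(note_text: str, concept: str) -> str:
    """Compose a task-specific instruction for an instruction-tuned LLM.
    Zero-shot: no labeled examples are included in the prompt."""
    return (
        "Read the following discharge note and answer yes or no.\n"
        f"Note: {note_text}\n"
        f"Question: Does the note document {concept}? Answer yes or no:"
    )

def parse_answer(generation: str) -> bool:
    """Map the model's free-text generation to a binary concept label."""
    return generation.strip().lower().startswith("yes")

def has_subtype(concept_labels: dict, subtype_concepts: list) -> bool:
    """Interpretable subtype rule: positive if any constituent concept is
    positive, so each contributing concept can be inspected directly."""
    return any(concept_labels.get(c, False) for c in subtype_concepts)
```

Keeping the extraction at the level of individual concepts is what makes the phenotype interpretable and easy to update: when a guideline changes the definition of a subtype, only the `subtype_concepts` rule needs editing, not the model or its prompts.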
Pages: 10
Related papers (50 in total)
  • [21] ZVQAF: Zero-shot visual question answering with feedback from large language models
    Liu, Cheng
    Wang, Chao
    Peng, Yan
    Li, Zhixu
    NEUROCOMPUTING, 2024, 580
  • [22] A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
    Zhuang, Shengyao
    Zhuang, Honglei
    Koopman, Bevan
    Zuccon, Guido
    PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 38 - 47
  • [23] Zero-Shot Recommendations with Pre-Trained Large Language Models for Multimodal Nudging
    Harrison, Rachel M.
    Dereventsov, Anton
    Bibin, Anton
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 1535 - 1542
  • [24] Extensible Prompts for Language Models on Zero-shot Language Style Customization
    Ge, Tao
    Hu, Jing
    Dong, Li
    Mao, Shaoguang
    Xia, Yan
    Wang, Xun
    Chen, Si-Qing
    Wei, Furu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [25] A Zero-Shot Interpretable Framework for Sentiment Polarity Extraction
    Chaisen, Thanakorn
    Charoenkwan, Phasit
    Kim, Cheong Ghil
    Thiengburanathum, Pree
    IEEE ACCESS, 2024, 12 : 10586 - 10607
  • [26] Retrieving-to-Answer: Zero-Shot Video Question Answering with Frozen Large Language Models
    Pan, Junting
    Lin, Ziyi
    Ge, Yuying
    Zhu, Xiatian
    Zhang, Renrui
    Wang, Yi
    Qiao, Yu
    Li, Hongsheng
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 272 - 283
  • [27] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding
    Meng, Yu
    Huang, Jiaxin
    Zhang, Yu
    Han, Jiawei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [28] Zero-Shot Recommendation as Language Modeling
    Sileo, Damien
    Vossen, Wout
    Raymaekers, Robbe
    ADVANCES IN INFORMATION RETRIEVAL, PT II, 2022, 13186 : 223 - 230
  • [29] Towards Zero-shot Language Modeling
    Ponti, Edoardo M.
    Vulic, Ivan
    Cotterell, Ryan
    Reichart, Roi
    Korhonen, Anna
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 2900 - +
  • [30] Zero-Shot Translation of Attention Patterns in VQA Models to Natural Language
    Salewski, Leonard
    Koepke, A. Sophia
    Lensch, Hendrik P. A.
    Akata, Zeynep
    PATTERN RECOGNITION, DAGM GCPR 2023, 2024, 14264 : 378 - 393