Zero-shot interpretable phenotyping of postpartum hemorrhage using large language models

Cited by: 10
Authors
Alsentzer, Emily [1]
Rasmussen, Matthew J. [2]
Fontoura, Romy [2]
Cull, Alexis L. [2]
Beaulieu-Jones, Brett [3]
Gray, Kathryn J. [4,5]
Bates, David W. [1,6]
Kovacheva, Vesela P. [2]
Affiliations
[1] Brigham & Womens Hosp, Div Gen Internal Med & Primary Care, Boston, MA USA
[2] Brigham & Womens Hosp, Dept Anesthesiol Perioperat & Pain Med, Boston, MA 02115 USA
[3] Univ Chicago, Dept Med, Sect Biomed Data Sci, Chicago, IL USA
[4] Massachusetts Gen Hosp, Ctr Genom Med, Boston, MA USA
[5] Brigham & Womens Hosp, Div Maternal Fetal Med, Boston, MA USA
[6] Harvard TH Chan Sch Publ Hlth, Dept Hlth Care Policy & Management, Boston, MA USA
Keywords
CLASSIFICATION; ALGORITHMS;
DOI
10.1038/s41746-023-00957-x
Chinese Library Classification
R19 [Health organization and services (health administration)]
Abstract
Many areas of medicine would benefit from deeper, more accurate phenotyping, but there are limited approaches for phenotyping using clinical notes without substantial annotated data. Large language models (LLMs) have demonstrated immense potential to adapt to novel tasks with no additional training by specifying task-specific instructions. Here we report the performance of a publicly available LLM, Flan-T5, in phenotyping patients with postpartum hemorrhage (PPH) using discharge notes from electronic health records (n = 271,081). The language model achieves strong performance in extracting 24 granular concepts associated with PPH. Identifying these granular concepts accurately allows the development of interpretable, complex phenotypes and subtypes. The Flan-T5 model achieves high fidelity in phenotyping PPH (positive predictive value of 0.95), identifying 47% more patients with this complication compared to the current standard of using claims codes. This LLM pipeline can be used reliably for subtyping PPH and outperforms a claims-based approach on the three most common PPH subtypes associated with uterine atony, abnormal placentation, and obstetric trauma. The advantage of this approach to subtyping is its interpretability, as each concept contributing to the subtype determination can be evaluated. Moreover, as definitions may change over time due to new guidelines, using granular concepts to create complex phenotypes enables prompt and efficient updating of the algorithm. Using this language modelling approach enables rapid phenotyping without the need for any manually annotated training data across multiple clinical use cases.
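The pipeline described above has two interpretable stages: zero-shot yes/no prompts extract granular concepts from a note, and transparent rules combine those concepts into PPH phenotypes and subtypes. A minimal runnable sketch of that two-stage idea follows; the concept names, prompt wording, and combination rules here are illustrative assumptions, not the authors' exact definitions, and the keyword-based stand-in model only substitutes for a real Flan-T5 call so the sketch runs without model weights.

```python
# Sketch of zero-shot, interpretable phenotyping: (1) ask an instruction-tuned
# LLM (e.g. Flan-T5) one yes/no question per granular concept, (2) combine the
# answers with explicit rules so every positive call is traceable.
# Concepts, prompts, and rules below are illustrative assumptions.

from typing import Callable, Dict

CONCEPT_PROMPTS: Dict[str, str] = {
    "uterine_atony": "Does the note mention uterine atony? Answer yes or no.",
    "placenta_accreta": "Does the note mention placenta accreta? Answer yes or no.",
    "laceration": "Does the note mention a laceration? Answer yes or no.",
    "ebl_over_1000ml": "Was the estimated blood loss over 1000 mL? Answer yes or no.",
}

def extract_concepts(note: str, ask_llm: Callable[[str], str]) -> Dict[str, bool]:
    """Query the model once per concept; a 'yes' answer marks the concept present."""
    answers = {}
    for concept, question in CONCEPT_PROMPTS.items():
        prompt = f"Context: {note}\n\nQuestion: {question}"
        answers[concept] = ask_llm(prompt).strip().lower().startswith("yes")
    return answers

def phenotype_pph(concepts: Dict[str, bool]) -> Dict[str, bool]:
    """Interpretable rules: each phenotype is an explicit function of concepts,
    so the rules can be audited and updated when definitions change."""
    return {
        "pph": concepts["ebl_over_1000ml"] or concepts["uterine_atony"],
        "atony_subtype": concepts["uterine_atony"],
        "abnormal_placentation_subtype": concepts["placenta_accreta"],
        "trauma_subtype": concepts["laceration"],
    }

def fake_llm(prompt: str) -> str:
    """Stand-in for a real zero-shot model call (e.g. Flan-T5 via transformers);
    answers 'yes' when a salient phrase from the question appears in the note."""
    context, question = prompt.lower().split("\n\nquestion: ")
    for phrase in ("uterine atony", "placenta accreta", "laceration"):
        if phrase in question and phrase in context:
            return "yes"
    if "blood loss" in question and "ebl 1500" in context:
        return "yes"
    return "no"

note = "Vaginal delivery complicated by uterine atony; EBL 1500 mL."
concepts = extract_concepts(note, fake_llm)
phenotypes = phenotype_pph(concepts)
```

In a real deployment the `fake_llm` stub would be replaced by a call to a generative model; the rule layer stays unchanged, which is what makes the phenotype definitions easy to audit and to update when clinical guidelines change.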
Pages: 10
Related papers (50 total)
  • [1] Zero-shot interpretable phenotyping of postpartum hemorrhage using large language models
    Alsentzer, Emily
    Rasmussen, Matthew J.
    Fontoura, Romy
    Cull, Alexis L.
    Beaulieu-Jones, Brett
    Gray, Kathryn J.
    Bates, David W.
    Kovacheva, Vesela P.
    NPJ DIGITAL MEDICINE, 2023, 6
  • [2] Large Language Models are Zero-Shot Reasoners
    Kojima, Takeshi
    Gu, Shixiang Shane
    Reid, Machel
    Matsuo, Yutaka
    Iwasawa, Yusuke
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [3] Large Language Models as Zero-Shot Conversational Recommenders
    He, Zhankui
    Xie, Zhouhang
    Jha, Rahul
    Steck, Harald
    Liang, Dawen
    Feng, Yesu
    Majumder, Bodhisattwa Prasad
    Kallus, Nathan
    McAuley, Julian
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 720 - 730
  • [4] Zero-Shot Classification of Art with Large Language Models
    Tojima, Tatsuya
    Yoshida, Mitsuo
    IEEE Access, 2025, 13 : 17426 - 17439
  • [5] Large Language Models are Zero-Shot Rankers for Recommender Systems
    Hou, Yupeng
    Zhang, Junjie
    Lin, Zihan
    Lu, Hongyu
    Xie, Ruobing
    McAuley, Julian
    Zhao, Wayne Xin
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT II, 2024, 14609 : 364 - 381
  • [6] Large Language Models Are Zero-Shot Time Series Forecasters
    Gruver, Nate
    Finzi, Marc
    Qiu, Shikai
    Wilson, Andrew Gordon
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [7] Examining Zero-Shot Vulnerability Repair with Large Language Models
    Pearce, Hammond
    Tan, Benjamin
    Ahmad, Baleegh
    Karri, Ramesh
    Dolan-Gavitt, Brendan
    2023 IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP, 2023, : 2339 - 2356
  • [8] Zero-shot Bilingual App Reviews Mining with Large Language Models
    Wei, Jialiang
    Courbis, Anne-Lise
    Lambolais, Thomas
    Xu, Binbin
    Bernard, Pierre Louis
    Dray, Gerard
    2023 IEEE 35TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2023, : 898 - 904
  • [9] Language Models as Zero-Shot Trajectory Generators
    Kwon, Teyun
    Di Palo, Norman
    Johns, Edward
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (07): : 6728 - 6735