Zero-shot interpretable phenotyping of postpartum hemorrhage using large language models

Cited by: 10
Authors
Alsentzer, Emily [1]
Rasmussen, Matthew J. [2]
Fontoura, Romy [2]
Cull, Alexis L. [2]
Beaulieu-Jones, Brett [3]
Gray, Kathryn J. [4,5]
Bates, David W. [1,6]
Kovacheva, Vesela P. [2]
Affiliations
[1] Brigham & Womens Hosp, Div Gen Internal Med & Primary Care, Boston, MA USA
[2] Brigham & Womens Hosp, Dept Anesthesiol Perioperat & Pain Med, Boston, MA 02115 USA
[3] Univ Chicago, Dept Med, Sect Biomed Data Sci, Chicago, IL USA
[4] Massachusetts Gen Hosp, Ctr Genom Med, Boston, MA USA
[5] Brigham & Womens Hosp, Div Maternal Fetal Med, Boston, MA USA
[6] Harvard TH Chan Sch Publ Hlth, Dept Hlth Care Policy & Management, Boston, MA USA
Keywords
CLASSIFICATION; ALGORITHMS
DOI
10.1038/s41746-023-00957-x
Chinese Library Classification
R19 [Health organizations and services (health services administration)]
Abstract
Many areas of medicine would benefit from deeper, more accurate phenotyping, but there are limited approaches for phenotyping using clinical notes without substantial annotated data. Large language models (LLMs) have demonstrated immense potential to adapt to novel tasks with no additional training by specifying task-specific instructions. Here we report the performance of a publicly available LLM, Flan-T5, in phenotyping patients with postpartum hemorrhage (PPH) using discharge notes from electronic health records (n = 271,081). The language model achieves strong performance in extracting 24 granular concepts associated with PPH. Identifying these granular concepts accurately allows the development of interpretable, complex phenotypes and subtypes. The Flan-T5 model achieves high fidelity in phenotyping PPH (positive predictive value of 0.95), identifying 47% more patients with this complication compared to the current standard of using claims codes. This LLM pipeline can be used reliably for subtyping PPH and outperforms a claims-based approach on the three most common PPH subtypes associated with uterine atony, abnormal placentation, and obstetric trauma. The advantage of this approach to subtyping is its interpretability, as each concept contributing to the subtype determination can be evaluated. Moreover, as definitions may change over time due to new guidelines, using granular concepts to create complex phenotypes enables prompt and efficient updating of the algorithm. Using this language modelling approach enables rapid phenotyping without the need for any manually annotated training data across multiple clinical use cases.
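The two-stage design the abstract describes — one zero-shot question per granular concept, then transparent rules that combine the extracted concept flags into phenotype and subtype calls — can be sketched as follows. The concept names, prompt wording, and rules below are illustrative placeholders, not the paper's actual 24 concepts or clinical definitions, and `ask_llm` stands in for a call to an instruction-tuned model such as Flan-T5.

```python
# Stage 1: one zero-shot yes/no prompt per granular concept.
# Stage 2: interpretable rules mapping concept flags to PPH subtypes.
# All concept names, prompts, and rules are illustrative, not the paper's.

CONCEPT_PROMPTS = {
    "uterine_atony": (
        "Does the discharge note below document uterine atony? "
        "Answer yes or no.\n\nNote: {note}"
    ),
    "placenta_accreta": (
        "Does the discharge note below document placenta accreta, increta, "
        "or percreta? Answer yes or no.\n\nNote: {note}"
    ),
    "cervical_laceration": (
        "Does the discharge note below document a cervical or vaginal "
        "laceration? Answer yes or no.\n\nNote: {note}"
    ),
    "high_blood_loss": (
        "Does the discharge note below document an estimated blood loss of "
        "1000 mL or more? Answer yes or no.\n\nNote: {note}"
    ),
}

def extract_concepts(note, ask_llm):
    """Ask the model one question per concept; a leading 'yes' is positive."""
    return {
        concept: ask_llm(template.format(note=note)).strip().lower().startswith("yes")
        for concept, template in CONCEPT_PROMPTS.items()
    }

# Each subtype is a plain OR over named concepts, so every positive call can
# be traced back to the exact concepts that fired -- and the rules can be
# edited in place when clinical definitions change.
SUBTYPE_RULES = {
    "uterine_atony_subtype": ["uterine_atony"],
    "abnormal_placentation_subtype": ["placenta_accreta"],
    "obstetric_trauma_subtype": ["cervical_laceration"],
}

def phenotype(concepts):
    """Combine concept flags into an overall PPH call plus subtype calls."""
    subtypes = {
        name: any(concepts[c] for c in members)
        for name, members in SUBTYPE_RULES.items()
    }
    has_pph = concepts["high_blood_loss"] or any(subtypes.values())
    return has_pph, subtypes
```

In the paper's pipeline `ask_llm` would wrap a Flan-T5 generation call over the discharge note; the point of keeping the combination step as explicit rules rather than a learned classifier is the interpretability the abstract emphasizes — every positive phenotype can be audited concept by concept.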
Pages: 10
Related Papers
50 items in total
  • [41] Zero-shot domain paraphrase with unaligned pre-trained language models
    Chen, Zheng
    Yuan, Hu
    Ren, Jiankun
    COMPLEX & INTELLIGENT SYSTEMS, 2023, 9 (01) : 1097 - 1110
  • [42] Vision-Language Models for Zero-Shot Classification of Remote Sensing Images
    Al Rahhal, Mohamad Mahmoud
    Bazi, Yakoub
    Elgibreen, Hebah
    Zuair, Mansour
    APPLIED SCIENCES-BASEL, 2023, 13 (22):
  • [43] Pre-trained Language Models Can be Fully Zero-Shot Learners
    Zhao, Xuandong
    Ouyang, Siqi
    Yu, Zhiguo
    Wu, Ming
    Li, Lei
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 15590 - 15606
  • [44] Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents
    Huang, Wenlong
    Abbeel, Pieter
    Pathak, Deepak
    Mordatch, Igor
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [46] Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
    Wang, Lei
    Xu, Wanyu
    Lan, Yihuai
    Hu, Zhiqiang
    Lan, Yunshi
    Lee, Roy Ka-Wei
    Lim, Ee-Peng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 2609 - 2634
  • [47] Effectiveness of large language models in automated evaluation of argumentative essays: finetuning vs. zero-shot prompting
    Wang, Qiao
    Gayed, John Maurice
    COMPUTER ASSISTED LANGUAGE LEARNING, 2024,
  • [48] Hybrid Emoji-Based Masked Language Models for Zero-Shot Abusive Language Detection
    Corazza, Michele
    Menini, Stefano
    Cabrio, Elena
    Tonelli, Sara
    Villata, Serena
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 943 - 949
  • [49] Translating Words to Worlds: Zero-Shot Synthesis of 3D Terrain from Textual Descriptions Using Large Language Models
    Zhang, Guangzi
    Chen, Lizhe
    Zhang, Yu
    Liu, Yan
    Ge, Yuyao
    Cai, Xingquan
    APPLIED SCIENCES-BASEL, 2024, 14 (08):
  • [50] Zero-shot Object Detection for Infrared Images Using Pre-trained Vision and Language Models
    Miwa, Shotaro
    Otsubo, Shun
    Jia, Qu
    Susumu, Yasuaki
    INFRARED TECHNOLOGY AND APPLICATIONS L, 2024, 13046