CancerGPT for few shot drug pair synergy prediction using large pretrained language models

Cited by: 0
Authors
Tianhao Li
Sandesh Shetty
Advaith Kamath
Ajay Jaiswal
Xiaoqian Jiang
Ying Ding
Yejin Kim
Affiliations
[1] University of Texas at Austin,School of Information
[2] University of Massachusetts Amherst,Manning College of Information and Computer Sciences
[3] University of Texas at Austin,Department of Chemical Engineering
[4] University of Texas Health Science Center at Houston,McWilliams School of Biomedical Informatics
Abstract
Large language models (LLMs) have shown significant potential for few-shot learning across various fields, even with minimal training data. However, their ability to generalize to unseen tasks in more complex fields, such as biology and medicine, has yet to be fully evaluated. LLMs offer a promising alternative approach for biological inference, particularly where structured data and sample sizes are limited, by extracting prior knowledge from text corpora. Here we report a few-shot learning approach that uses LLMs to predict the synergy of drug pairs in rare tissues that lack structured data and features. Our experiments, which involved seven rare tissues from different cancer types, demonstrate that the LLM-based prediction model achieves significant accuracy with very few or zero samples. Our proposed model, CancerGPT (~124M parameters), is comparable to the much larger fine-tuned GPT-3 model (~175B parameters). Our research contributes to tackling drug pair synergy prediction in rare tissues with limited data, and to advancing the use of LLMs for biological and medical inference tasks.
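As a rough illustration of how such an LLM-based few-shot predictor can be set up, the sketch below verbalizes structured drug-pair records into text and fine-tunes a GPT-2-scale model (~124M parameters, matching the size the abstract cites for CancerGPT) on a handful of labeled examples. This is a minimal sketch, not the authors' released code: the prompt template, the field names (drug1, drug2, cell_line, tissue), the example records, and the use of a sequence-classification head on GPT-2 are all assumptions made here for demonstration.

```python
# Minimal sketch (assumed setup, not the paper's released code): verbalize
# structured drug-pair synergy records into text and fine-tune a small
# pretrained LM (GPT-2, ~124M params) on k labeled "shots".
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def verbalize(pair: dict) -> str:
    """Turn one structured synergy record into a natural-language prompt.
    The template and field names are illustrative assumptions."""
    return (
        f"Drug 1 is {pair['drug1']}. Drug 2 is {pair['drug2']}. "
        f"The cell line is {pair['cell_line']}, derived from {pair['tissue']} tissue. "
        "Are the two drugs synergistic in this cell line?"
    )

# Hypothetical few-shot training examples for one rare tissue
# (drugs, cell line, and labels are placeholders, not real results).
shots = [
    {"drug1": "drug_A", "drug2": "drug_B",
     "cell_line": "CELL-1", "tissue": "bone", "label": 1},
    {"drug1": "drug_C", "drug2": "drug_D",
     "cell_line": "CELL-1", "tissue": "bone", "label": 0},
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id  # needed for batched inputs
model.train()

# One fine-tuning step on the k shots (real runs need multiple epochs,
# a held-out set, and careful learning-rate selection).
optim = torch.optim.AdamW(model.parameters(), lr=1e-5)
batch = tokenizer([verbalize(s) for s in shots],
                  return_tensors="pt", padding=True)
labels = torch.tensor([s["label"] for s in shots])
loss = model(**batch, labels=labels).loss
loss.backward()
optim.step()
```

At inference time the same verbalize() template would be applied to an unseen pair from the rare tissue, and the argmax of the fine-tuned head's logits read off as the synergy prediction.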
Related papers (50 items)
  • [21] Language Models are Few-Shot Butlers
    Micheli, Vincent
    Fleuret, Francois
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 9312 - 9318
  • [22] Prompt engineering for zero-shot and few-shot defect detection and classification using a visual-language pretrained model
    Yong, Gunwoo
    Jeon, Kahyun
    Gil, Daeyoung
    Lee, Ghang
    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2023, 38 (11) : 1536 - 1554
  • [23] Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning
    Shi, Xiaoming
    Xue, Siqiao
    Wang, Kangrui
    Zhou, Fan
    Zhang, James Y.
    Zhou, Jun
    Tan, Chenhao
    Mei, Hongyuan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [24] Large Product Key Memory for Pretrained Language Models
    Kim, Gyuwan
    Jung, Tae-Hwan
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 4060 - 4069
  • [25] A few-shot learning method based on knowledge graph in large language models
    Wang, Feilong
    Shi, Donghui
    Aguilar, Jose
    Cui, Xinyi
    INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS, 2024,
  • [26] A Closer Look at the Few-Shot Adaptation of Large Vision-Language Models
    Silva-Rodriguez, Julio
    Hajimiri, Sina
    Ben Ayed, Ismail
    Dolz, Jose
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 23681 - 23690
  • [27] Using large language models to investigate cultural ecosystem services perceptions: A few-shot and prompt method
    Luo, Hanyue
    Zhang, Zhiduo
    Zhu, Qing
    Ben Ameur, Nour El Houda
    Liu, Xiao
    Ding, Fan
    Cai, Yongli
    LANDSCAPE AND URBAN PLANNING, 2025, 258
  • [29] Harnessing large language models' zero-shot and few-shot learning capabilities for regulatory research
    Meshkin, Hamed
    Zirkle, Joel
    Arabidarrehdor, Ghazal
    Chaturbedi, Anik
    Chakravartula, Shilpa
    Mann, John
    Thrasher, Bradlee
    Li, Zhihua
    BRIEFINGS IN BIOINFORMATICS, 2024, 25 (05)
  • [30] Few-shot Subgoal Planning with Language Models
    Logeswaran, Lajanugen
    Fu, Yao
    Lee, Moontae
    Lee, Honglak
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 5493 - 5506