CancerGPT for few shot drug pair synergy prediction using large pretrained language models

Cited by: 0
Authors
Tianhao Li
Sandesh Shetty
Advaith Kamath
Ajay Jaiswal
Xiaoqian Jiang
Ying Ding
Yejin Kim
Affiliations
[1] University of Texas at Austin, School of Information
[2] University of Massachusetts Amherst, Manning College of Information and Computer Sciences
[3] University of Texas at Austin, Department of Chemical Engineering
[4] University of Texas Health Science Center at Houston, McWilliams School of Biomedical Informatics
Abstract
Large language models (LLMs) have shown significant potential for few-shot learning across various fields, even with minimal training data. However, their ability to generalize to unseen tasks in more complex fields, such as biology and medicine, has yet to be fully evaluated. LLMs offer a promising alternative approach for biological inference, particularly where structured data and sample sizes are limited, by extracting prior knowledge from text corpora. Here we report a few-shot learning approach that uses LLMs to predict the synergy of drug pairs in rare tissues that lack structured data and features. Our experiments, covering seven rare tissues from different cancer types, demonstrate that the LLM-based prediction model achieves significant accuracy with very few or even zero training samples. Our proposed model, CancerGPT (~124M parameters), performs comparably to the much larger fine-tuned GPT-3 model (~175B parameters). Our research contributes to tackling drug pair synergy prediction in rare tissues with limited data and to advancing the use of LLMs for biological and medical inference tasks.
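As a rough illustration of the recipe the abstract describes, the sketch below verbalizes a tabular drug-pair record into a natural-language prompt and fine-tunes a ~124M-parameter GPT-2-class model (via Hugging Face transformers) as a binary synergy classifier. The prompt template, the toy drug/cell-line examples and labels, and the hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of LLM-based drug-pair synergy prediction, assuming the
# general recipe from the abstract: structured records are verbalized into
# text prompts and a ~124M-parameter GPT-2-class model is fine-tuned to
# emit a binary synergy label. All specifics here are illustrative.
import torch
from transformers import GPT2TokenizerFast, GPT2ForSequenceClassification


def verbalize(drug_a: str, drug_b: str, cell_line: str, tissue: str) -> str:
    """Turn one structured drug-pair record into a prompt (hypothetical template)."""
    return (
        f"Determine whether the drug combination is synergistic. "
        f"Drug 1: {drug_a}. Drug 2: {drug_b}. "
        f"Cell line: {cell_line}. Tissue: {tissue}. Answer:"
    )


tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # GPT-2 small: ~124M parameters
tokenizer.pad_token = tokenizer.eos_token              # GPT-2 ships without a pad token
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# A handful of labeled pairs stands in for the k-shot training set.
few_shot = [
    ("5-FU", "Veliparib", "ES2", "ovary", 1),      # 1 = synergistic (toy label)
    ("MK-8669", "Lapatinib", "A2058", "skin", 0),  # 0 = not synergistic (toy label)
]
batch = tokenizer(
    [verbalize(a, b, c, t) for a, b, c, t, _ in few_shot],
    return_tensors="pt", padding=True,
)
labels = torch.tensor([y for *_, y in few_shot])

# One gradient step illustrates fine-tuning; a real run would loop over the
# k available samples from the rare tissue (k = 0 in the zero-shot case,
# where the pretrained weights are used as-is).
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()

# Score an unseen pair from a rare tissue.
model.eval()
with torch.no_grad():
    query = tokenizer(verbalize("Drug X", "Drug Y", "CellLineZ", "bone"),
                      return_tensors="pt")
    p_synergy = model(**query).logits.softmax(-1)[0, 1]
print(f"P(synergistic) = {p_synergy:.3f}")
```

In the few-shot setting the abstract describes, training would see only a handful of labeled pairs from the rare tissue of interest, relying on the prior knowledge encoded in the pretrained weights for the rest.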
Related Papers
50 records in total
  • [1] CancerGPT for few shot drug pair synergy prediction using large pretrained language models
    Li, Tianhao
    Shetty, Sandesh
    Kamath, Advaith
    Jaiswal, Ajay
    Jiang, Xiaoqian
    Ding, Ying
    Kim, Yejin
    NPJ DIGITAL MEDICINE, 2024, 7 (01)
  • [2] Unsupervised and few-shot parsing from pretrained language models
    Zeng, Zhiyuan
    Xiong, Deyi
    ARTIFICIAL INTELLIGENCE, 2022, 305
  • [3] Unsupervised and Few-Shot Parsing from Pretrained Language Models (Extended Abstract)
    Zeng, Zhiyuan
    Xiong, Deyi
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023: 6995 - 7000
  • [4] Few-shot Knowledge Graph-to-Text Generation with Pretrained Language Models
    Li, Junyi
    Tang, Tianyi
    Zhao, Wayne Xin
    Wei, Zhicheng
    Yuan, Nicholas Jing
    Wen, Ji-Rong
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021: 1558 - 1568
  • [5] Refactoring Programs Using Large Language Models with Few-Shot Examples
    Shirafuji, Atsushi
    Oda, Yusuke
    Suzuki, Jun
    Morishita, Makoto
    Watanobe, Yutaka
    PROCEEDINGS OF THE 2023 30TH ASIA-PACIFIC SOFTWARE ENGINEERING CONFERENCE, APSEC 2023, 2023: 151 - 160
  • [6] Large Language Models Enable Few-Shot Clustering
    Viswanathan, Vijay
    Gashteovski, Kiril
    Lawrence, Carolin
    Wu, Tongshuang
    Neubig, Graham
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2024, 12: 321 - 333
  • [7] Large Language Models are few(1)-shot Table Reasoners
    Chen, Wenhu
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023: 1120 - 1130
  • [8] University Student Dropout Prediction Using Pretrained Language Models
    Won, Hyun-Sik
    Kim, Min-Ji
    Kim, Dohyun
    Kim, Hee-Soo
    Kim, Kang-Min
    APPLIED SCIENCES-BASEL, 2023, 13 (12)
  • [9] Few-Shot Drug Synergy Prediction With a Prior-Guided Hypernetwork Architecture
    Zhang, Qing-Qing
    Zhang, Shao-Wu
    Feng, Yue-Hua
    Shi, Jian-Yu
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (08): 9709 - 9725
  • [10] Large Language Models (LLMs) Enable Few-Shot Clustering
    Viswanathan, Vijay
    Gashteovski, Kiril
    Lawrence, Carolin
    Wu, Tongshuang
    Neubig, Graham
    NEC TECHNICAL JOURNAL, 2024, 17 (02): 80 - 90