Taiyi: a bilingual fine-tuned large language model for diverse biomedical tasks

Cited by: 5
Authors
Luo, Ling [1 ,2 ]
Ning, Jinzhong [1 ]
Zhao, Yingwen [1 ]
Wang, Zhijun [1 ]
Ding, Zeyuan [1 ]
Chen, Peng [1 ]
Fu, Weiru [1 ]
Han, Qinyu [1 ]
Xu, Guangtao [1 ]
Qiu, Yunzhi [1 ]
Pan, Dinghao [1 ]
Li, Jiru [1 ]
Li, Hao [1 ]
Feng, Wenduo [1 ]
Tu, Senbo [1 ]
Liu, Yuqi [1 ]
Yang, Zhihao [1 ]
Wang, Jian [1 ]
Sun, Yuanyuan [1 ]
Lin, Hongfei [1 ]
Affiliations
[1] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian 116024, Peoples R China
[2] Dalian Univ Technol, Sch Comp Sci & Technol, 2 Linggong Rd, Ganjingzi Dist, Dalian 116024, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
natural language processing; large language model; supervised fine-tuning; biomedical multitasking
DOI
10.1093/jamia/ocae037
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Objective: Most existing fine-tuned biomedical large language models (LLMs) focus on improving performance on monolingual biomedical question answering and conversation tasks. To investigate the effectiveness of fine-tuned LLMs on diverse biomedical natural language processing (NLP) tasks across languages, we present Taiyi, a bilingual fine-tuned LLM for diverse biomedical NLP tasks.
Materials and Methods: We first curated a comprehensive collection of 140 existing biomedical text-mining datasets (102 English and 38 Chinese) covering more than 10 task types. These corpora were then converted into instruction data used to fine-tune the general-purpose LLM. For the supervised fine-tuning phase, a two-stage strategy was proposed to optimize model performance across the various tasks.
Results: Experimental results on 13 test sets, covering named entity recognition, relation extraction, text classification, and question answering, demonstrate that Taiyi achieves superior performance compared with general LLMs. A case study on additional biomedical NLP tasks further shows Taiyi's considerable potential for bilingual biomedical multitasking.
Conclusion: Leveraging rich, high-quality biomedical corpora and developing effective fine-tuning strategies can significantly improve the performance of LLMs in the biomedical domain. Taiyi demonstrates bilingual multitasking capability through supervised fine-tuning. However, tasks such as information extraction, which are not inherently generative, remain challenging for LLM-based generative approaches, which still underperform conventional discriminative approaches built on smaller language models.
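As a minimal illustration of the instruction-data conversion described under Materials and Methods, the Python sketch below turns one biomedical NER example into an (instruction, input, output) record of the kind commonly used for supervised fine-tuning. The record fields, prompt wording, and entity-output format are illustrative assumptions, not taken from the Taiyi paper or its released code.

import json
from dataclasses import dataclass

@dataclass
class NERExample:
    text: str
    entities: list  # list of (mention, entity_type) pairs

def to_instruction(example: NERExample) -> dict:
    # Wrap the raw annotation in a natural-language task description so a
    # generative LLM can be fine-tuned on it as a text-to-text problem.
    instruction = ("Extract all biomedical entities from the following text "
                   "and report each one as 'mention (type)'.")
    output = "; ".join(f"{m} ({t})" for m, t in example.entities)
    return {"instruction": instruction, "input": example.text, "output": output}

if __name__ == "__main__":
    ex = NERExample(
        text="Aspirin reduces the risk of myocardial infarction.",
        entities=[("Aspirin", "Chemical"), ("myocardial infarction", "Disease")],
    )
    print(json.dumps(to_instruction(ex), ensure_ascii=False, indent=2))

Under a scheme like this, every dataset (NER, relation extraction, classification, QA) maps to the same record schema, which is what allows a single generative model to be fine-tuned across task types.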
Pages: 1865-1874
Page count: 10
Related Papers
50 records in total
  • [1] CentralBankRoBERTa: A fine-tuned large language model for central bank communications
    Pfeifer, Moritz
    Marohl, Vincent P.
    JOURNAL OF FINANCE AND DATA SCIENCE, 2023, 9
  • [3] EpilepsyLLM: Domain-Specific Large Language Model Fine-tuned with Epilepsy Medical Knowledge
    Zhao, Xuyang
    Zhao, Qibin
    Tanaka, Toshihisa
    arXiv
  • [4] Website Category Classification Using Fine-tuned BERT Language Model
    Demirkiran, Ferhat
    Cayir, Aykut
    Unal, Ugur
    Dag, Hasan
    2020 5TH INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND ENGINEERING (UBMK), 2020: 333-336
  • [5] Fingerprinting Fine-tuned Language Models in the Wild
    Diwan, Nirav
    Chakraborty, Tanmoy
    Shafiq, Zubair
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021: 4652-4664
  • [6] Arabic sarcasm detection: An enhanced fine-tuned language model approach
    Galal, Mohamed A.
    Yousef, Ahmed Hassan
    Zayed, Hala H.
    Medhat, Walaa
    AIN SHAMS ENGINEERING JOURNAL, 2024, 15 (6)
  • [7] Extracting structured data from organic synthesis procedures using a fine-tuned large language model
    Ai, Qianxiang
    Meng, Fanwang
    Shi, Jiale
    Pelkie, Brenden
    Coley, Connor W.
    DIGITAL DISCOVERY, 2024, 3 (9): 1822-1831
  • [8] The Fine-Tuned Large Language Model for Extracting the Progressive Bone Metastasis from Unstructured Radiology Reports
    Kanemaru, Noriko
    Yasaka, Koichiro
    Fujita, Nana
    Kanzawa, Jun
    Abe, Osamu
    JOURNAL OF IMAGING INFORMATICS IN MEDICINE, 2024: 865-872
  • [9] Fine-Tuned BERT Model for Large Scale and Cognitive Classification of MOOCs
    Sebbaq, Hanane
    El Faddouli, Nour-eddine
    INTERNATIONAL REVIEW OF RESEARCH IN OPEN AND DISTRIBUTED LEARNING, 2022, 23 (2): 170-190
  • [10] AirBERT: A fine-tuned language representation model for airlines tweet sentiment analysis
    Yenkikar, Anuradha
    Babu, C. Narendra
    INTELLIGENT DECISION TECHNOLOGIES-NETHERLANDS, 2023, 17 (2): 435-455