Pre-trained Language Models in Biomedical Domain: A Systematic Survey

Cited by: 32
|
Authors
Wang, Benyou [1 ]
Xie, Qianqian [2 ]
Pei, Jiahuan [3 ]
Chen, Zhihong [1 ]
Tiwari, Prayag [4 ]
Li, Zhao [5 ]
Fu, Jie [6 ]
Affiliations
[1] Chinese Univ Hong Kong, Shenzhen, Peoples R China
[2] Univ Manchester, Dept Comp Sci, Manchester, Lancs, England
[3] Univ Amsterdam, Amsterdam, Netherlands
[4] Halmstad Univ, Sch Informat Technol, Halmstad, Sweden
[5] Univ Texas Hlth Sci Ctr Houston, Houston, TX 77030 USA
[6] Univ Montreal, Montreal, PQ, Canada
Keywords
Biomedical domain; pre-trained language models; natural language processing; transformers; resource; corpus
DOI
10.1145/3611651
Chinese Library Classification
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
Pre-trained language models (PLMs) have become the de facto paradigm for most natural language processing tasks. The biomedical domain also benefits: researchers from the informatics, medicine, and computer science communities have proposed various PLMs trained on biomedical datasets, e.g., biomedical text, electronic health records, and protein and DNA sequences, for a range of biomedical tasks. However, the cross-disciplinary nature of biomedical PLMs hinders their spread across communities; some existing works are isolated from each other, lacking comprehensive comparison and discussion. It is nontrivial to produce a survey that not only systematically reviews recent advances in biomedical PLMs and their applications but also standardizes terminology and benchmarks. This article summarizes recent progress on pre-trained language models in the biomedical domain and their applications in downstream biomedical tasks. In particular, we discuss the motivations for PLMs in the biomedical domain and introduce their key concepts. We then propose a taxonomy of existing biomedical PLMs that systematically categorizes them from various perspectives, and we exhaustively discuss their applications in downstream biomedical tasks. Finally, we discuss various limitations and future trends, aiming to provide inspiration for future research.
Pages: 52
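To make the abstract's notion of a biomedical PLM concrete, the sketch below loads a publicly released biomedical checkpoint and uses it for cloze-style masked-token prediction, the pre-training objective behind BERT-style PLMs such as BioBERT. This is a minimal sketch, not code from the survey: it assumes the Hugging Face transformers library and the dmis-lab/biobert-base-cased-v1.1 checkpoint; any public biomedical checkpoint that ships a masked-language-modeling head could be substituted.

# Minimal sketch: probing a biomedical PLM with masked-token prediction.
# Assumptions (not prescribed by the survey): the Hugging Face `transformers`
# library and the `dmis-lab/biobert-base-cased-v1.1` checkpoint.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "dmis-lab/biobert-base-cased-v1.1"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# Cloze-style query: ask the model to fill in a masked biomedical term.
text = f"Aspirin is commonly used to reduce {tokenizer.mask_token} and inflammation."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Locate the [MASK] position and list the five most likely filler tokens.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))

The same checkpoint can then be fine-tuned for the downstream biomedical tasks the survey covers (e.g., named entity recognition or relation extraction) by swapping AutoModelForMaskedLM for a task-specific head such as AutoModelForTokenClassification.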