A pre-training and self-training approach for biomedical named entity recognition

Cited by: 20
Authors
Gao, Shang [1 ]
Kotevska, Olivera [2 ]
Sorokine, Alexandre [3 ]
Christian, J. Blair [1 ]
Affiliations
[1] Oak Ridge Natl Lab, Computat Sci & Engn Div, Oak Ridge, TN 37830 USA
[2] Oak Ridge Natl Lab, Comp Sci & Math Div, Oak Ridge, TN USA
[3] Oak Ridge Natl Lab, Geospatial Sci & Human Secur Div, Oak Ridge, TN USA
Source
PLOS ONE | 2021, Vol. 16, No. 02
Keywords
TEXT;
DOI
10.1371/journal.pone.0246310
Chinese Library Classification codes
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject classification codes
07 ; 0710 ; 09 ;
Abstract
Named entity recognition (NER) is a key component of many scientific literature mining tasks, such as information retrieval, information extraction, and question answering; however, many modern approaches require large amounts of labeled training data in order to be effective. This severely limits the effectiveness of NER models in applications where expert annotations are difficult and expensive to obtain. In this work, we explore the effectiveness of transfer learning and semi-supervised self-training to improve the performance of NER models in biomedical settings with very limited labeled data (250-2000 labeled samples). We first pre-train a BiLSTM-CRF and a BERT model on a very large general biomedical NER corpus such as MedMentions or Semantic Medline, then fine-tune the model on a more specific target NER task that has very limited training data; finally, we apply semi-supervised self-training using unlabeled data to further boost model performance. We show that in NER tasks that focus on common biomedical entity types such as those in the Unified Medical Language System (UMLS), combining transfer learning with self-training enables an NER model such as a BiLSTM-CRF or BERT to achieve performance similar to the same model trained on 3x-8x the amount of labeled data. We further show that our approach can also boost performance in a low-resource application where entity types are rarer and not specifically covered in UMLS.
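To make the three-stage approach in the abstract concrete, the sketch below outlines it in Python. It is illustrative only: the NERModel interface, the 0.9 confidence threshold, and the number of self-training rounds are assumptions made for the sketch, not details taken from the paper.

    # Minimal sketch of the pipeline described in the abstract:
    #   1) pre-train on a large general biomedical NER corpus
    #   2) fine-tune on the small labeled target set (250-2000 samples)
    #   3) self-train: pseudo-label unlabeled text and retrain
    # NERModel, the confidence threshold, and the round count are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Example:
        tokens: list[str]   # sentence tokens
        labels: list[str]   # BIO tags, one per token

    class NERModel:
        """Stand-in for a BiLSTM-CRF or BERT tagger (hypothetical interface)."""
        def train(self, data: list[Example]) -> None:
            pass  # fit/update model parameters on the given examples

        def predict(self, tokens: list[str]) -> tuple[list[str], float]:
            # returns predicted tags plus a sentence-level confidence score
            return ["O"] * len(tokens), 0.0

    def pretrain_finetune_selftrain(model: NERModel,
                                    general_corpus: list[Example],
                                    target_labeled: list[Example],
                                    target_unlabeled: list[list[str]],
                                    threshold: float = 0.9,
                                    rounds: int = 3) -> NERModel:
        model.train(general_corpus)   # stage 1: pre-train (e.g. MedMentions)
        model.train(target_labeled)   # stage 2: fine-tune on the target task
        for _ in range(rounds):       # stage 3: semi-supervised self-training
            pseudo = []
            for toks in target_unlabeled:
                tags, conf = model.predict(toks)
                if conf >= threshold:            # keep confident pseudo-labels
                    pseudo.append(Example(toks, tags))
            model.train(target_labeled + pseudo)  # retrain on labeled + pseudo
        return model

The design choice that matters here is the confidence filter: self-training only helps when the pseudo-labels added back into training are mostly correct, so low-confidence predictions are discarded rather than reinforced.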
Pages: 23
Related Papers
50 entries in total
  • [1] SELF-TRAINING AND PRE-TRAINING ARE COMPLEMENTARY FOR SPEECH RECOGNITION
    Xu, Qiantong
    Baevski, Alexei
    Likhomanenko, Tatiana
    Tomasello, Paden
    Conneau, Alexis
    Collobert, Ronan
    Synnaeve, Gabriel
    Auli, Michael
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3030 - 3034
  • [2] A Self-training Approach for Few-Shot Named Entity Recognition
    Qian, Yudong
    Zheng, Weiguo
    [J]. WEB AND BIG DATA, PT II, APWEB-WAIM 2022, 2023, 13422 : 183 - 191
  • [3] Rethinking Pre-training and Self-training
    Zoph, Barret
    Ghiasi, Golnaz
    Lin, Tsung-Yi
    Cui, Yin
    Liu, Hanxiao
    Cubuk, Ekin D.
    Le, Quoc V.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [4] Coarse-to-Fine Pre-training for Named Entity Recognition
    Xue, Mengge
    Yu, Bowen
    Zhang, Zhenyu
    Liu, Tingwen
    Zhang, Yue
Wang, Bin
    [J]. PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 6345 - 6354
  • [5] Virus Named Entity Recognition based on Pre-training Model
    Mou, Hanlin
    Zheng, Shanshan
    Wu, Haifang
    Li, Bojing
    He, Tingting
    Jiang, Xingpeng
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, 2020, : 473 - 476
  • [6] Self-training and co-training applied to Spanish Named Entity Recognition
    Kozareva, Z
    Bonev, B
    Montoyo, A
    [J]. MICAI 2005: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2005, 3789 : 770 - 779
  • [7] PTWA: Pre-training with Word Attention for Chinese Named Entity Recognition
    Ma, Kaixin
    Liu, Meiling
    Zhao, Tiejun
    Zhou, Jiyun
    Yu, Yang
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [8] Chinese named entity recognition combining active learning with self-training
    Zhong, Zhinong
    [J]. National University of Defense Technology, (36)
  • [9] GeoNER: Geological Named Entity Recognition with Enriched Domain Pre-Training Model and Adversarial Training
    Ma, Kai
    Hu, Xinxin
    Tian, Miao
    Tan, Yongjian
    Zheng, Shuai
    Tao, Liufeng
    Qiu, Qinjun
    [J]. Acta Geologica Sinica (English Edition), 2024, 98 (05) : 1404 - 1417
  • [10] Low-Resource Named Entity Recognition via the Pre-Training Model
    Chen, Siqi
    Pei, Yijie
    Ke, Zunwang
    Silamu, Wushour
    [J]. SYMMETRY-BASEL, 2021, 13 (05):