50 records in total
- [1] SELF-TRAINING AND PRE-TRAINING ARE COMPLEMENTARY FOR SPEECH RECOGNITION [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3030 - 3034
- [2] A Self-training Approach for Few-Shot Named Entity Recognition [J]. WEB AND BIG DATA, PT II, APWEB-WAIM 2022, 2023, 13422 : 183 - 191
- [3] Rethinking Pre-training and Self-training [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
- [4] Coarse-to-Fine Pre-training for Named Entity Recognition [J]. PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 6345 - 6354
- [5] Virus Named Entity Recognition based on Pre-training Model [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, 2020, : 473 - 476
- [6] Self-training and co-training applied to Spanish Named Entity Recognition [J]. MICAI 2005: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2005, 3789 : 770 - 779
- [7] PTWA: Pre-training with Word Attention for Chinese Named Entity Recognition [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
- [8] Chinese named entity recognition combined active learning with self-training [J]. Zhong, Zhinong. National University of Defense Technology, (36)
- [10] Low-Resource Named Entity Recognition via the Pre-Training Model [J]. SYMMETRY-BASEL, 2021, 13 (05):