50 records in total
- [1] BioHanBERT: A Hanzi-aware Pre-trained Language Model for Chinese Biomedical Text Mining [J]. 2021 21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2021), 2021: 1415-1420
- [2] Using a Pre-Trained Language Model for Medical Named Entity Extraction in Chinese Clinic Text [J]. PROCEEDINGS OF 2020 IEEE 10TH INTERNATIONAL CONFERENCE ON ELECTRONICS INFORMATION AND EMERGENCY COMMUNICATION (ICEIEC 2020), 2020: 312-317
- [5] FinBERT: A Pre-trained Financial Language Representation Model for Financial Text Mining [J]. PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020: 4513-4519
- [6] Improving text mining in plant health domain with GAN and/or pre-trained language model [J]. FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023, 6
- [7] ViHealthBERT: Pre-trained Language Models for Vietnamese in Health Text Mining [J]. LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022: 328-337
- [8] A Pre-trained Model for Chinese Medical Record Punctuation Restoration [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT VII, 2024, 14431: 101-112
- [9] RoBERTuito: a pre-trained language model for social media text in Spanish [J]. LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022: 7235-7243
- [10] Leveraging Pre-Trained Language Model for Summary Generation on Short Text [J]. IEEE ACCESS, 2020, 8: 228798-228803