50 items in total
- [2] On the Power of Pre-Trained Text Representations: Models and Applications in Text Mining [J]. KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021: 4052-4053
- [3] Text Detoxification using Large Pre-trained Neural Models [J]. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 7979-7996
- [4] Efficient Text Analysis with Pre-trained Neural Network Models [J]. 2022 IEEE Spoken Language Technology Workshop (SLT), 2022: 671-676
- [6] Non-Autoregressive Text Generation with Pre-trained Language Models [J]. 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021), 2021: 234-243
- [7] ViHealthBERT: Pre-trained Language Models for Vietnamese in Health Text Mining [J]. LREC 2022: Thirteenth International Conference on Language Resources and Evaluation, 2022: 328-337
- [8] Radical-Vectors with Pre-trained Models for Chinese Text Classification [J]. 2022 Euro-Asia Conference on Frontiers of Computer Science and Information Technology (FCSIT), 2022: 12-15
- [10] Short-Text Classification Method with Text Features from Pre-trained Models [J]. Data Analysis and Knowledge Discovery, 2021, 5(9): 21-30