50 entries in total
- [41] Bi-tuning: Efficient Transfer from Pre-trained Models [J]. Machine Learning and Knowledge Discovery in Databases: Research Track, ECML PKDD 2023, Pt V, 2023, 14173: 357-373
- [42] Efficient Utilization of Large Pre-trained Models for Low Resource ASR [J]. 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), 2023
- [44] Neural Transfer Learning for Vietnamese Sentiment Analysis Using Pre-trained Contextual Language Models [J]. 2021 IEEE International Conference on Machine Learning and Applied Network Technologies (ICMLANT II), 2021: 84-88
- [45] Vulnerability Analysis of Continuous Prompts for Pre-trained Language Models [J]. Artificial Neural Networks and Machine Learning, ICANN 2023, Pt IX, 2023, 14262: 508-519
- [46] Transformer over Pre-trained Transformer for Neural Text Segmentation with Enhanced Topic Coherence [J]. Findings of the Association for Computational Linguistics, EMNLP 2021, 2021: 3334-3340
- [47] Difference between Multi-modal vs. Text Pre-trained Models in Embedding Text [J]. Acta Scientiarum Naturalium Universitatis Pekinensis, 2023, 59(01): 48-56
- [48] Neural Machine Translation of Clinical Text: An Empirical Investigation into Multilingual Pre-trained Language Models and Transfer-Learning [J]. Frontiers in Digital Health, 2024, 6
- [49] How Different are Pre-trained Transformers for Text Ranking? [J]. Advances in Information Retrieval, Pt II, 2022, 13186: 207-214
- [50] CUE: An Uncertainty Interpretation Framework for Text Classifiers Built on Pre-Trained Language Models [J]. Uncertainty in Artificial Intelligence, 2023, 216: 1253-1262