50 records in total
- [24] Difference between Multi-modal vs. Text Pre-trained Models in Embedding Text [J]. Beijing Daxue Xuebao (Ziran Kexue Ban) / Acta Scientiarum Naturalium Universitatis Pekinensis, 2023, 59(1): 48-56
- [25] How Different are Pre-trained Transformers for Text Ranking? [J]. Advances in Information Retrieval, Pt II, 2022, 13186: 207-214
- [26] CUE: An Uncertainty Interpretation Framework for Text Classifiers Built on Pre-Trained Language Models [J]. Uncertainty in Artificial Intelligence, 2023, 216: 1253-1262
- [27] Pre-trained Text Embeddings for Enhanced Text-to-Speech Synthesis [J]. INTERSPEECH 2019, 2019: 4430-4434
- [28] A Comparison of SVM Against Pre-trained Language Models (PLMs) for Text Classification Tasks [J]. Machine Learning, Optimization, and Data Science (LOD 2022), Pt II, 2023, 13811: 304-313
- [29] General Purpose Text Embeddings from Pre-trained Language Models for Scalable Inference [J]. Findings of the Association for Computational Linguistics: EMNLP 2020, 2020
- [30] BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification [J]. LREC 2022: Thirteenth International Conference on Language Resources and Evaluation, 2022: 7377-7385