50 records in total
- [21] On the importance of pre-training data volume for compact language models [J]. PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 7853 - 7858
- [23] Knowledge Transfer via Pre-training for Recommendation: A Review and Prospect [J]. FRONTIERS IN BIG DATA, 2021, 4
- [24] Rethinking ImageNet Pre-training [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 4917 - 4926
- [25] Pre-Training to Learn in Context [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 4849 - 4870
- [26] Improving Fractal Pre-training [J]. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 2412 - 2421
- [27] Pre-training via Paraphrasing [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
- [28] THE PRE-TRAINING SELECTION OF TEACHERS [J]. JOURNAL OF EDUCATIONAL RESEARCH, 1934, 28 (02): 92 - 117
- [29] A comparison of supervised and unsupervised pre-training of end-to-end models [J]. INTERSPEECH 2021, 2021, : 731 - 735
- [30] Photo Pre-Training, But for Sketch [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 2754 - 2764