- [31] Evolving Deep Architectures: A New Blend of CNNs and Transformers Without Pre-training Dependencies. Deep Learning Theory and Applications, Pt I (DELTA 2024), 2024, 2171: 163-175.
- [33] Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Long Papers, Vol 1, 2023: 11156-11172.
- [34] Multi-stage Pre-training over Simplified Multimodal Pre-training Models. 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Vol 1, 2021: 2556-2565.
- [35] Table Pre-training: A Survey on Model Architectures, Pre-training Objectives, and Downstream Tasks. Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI 2022), 2022: 5426-5435.
- [36] ET-BERT: A Contextualized Datagram Representation with Pre-training Transformers for Encrypted Traffic Classification. Proceedings of the ACM Web Conference 2022 (WWW '22), 2022: 633-642.
- [37] Rethinking ImageNet Pre-training. 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019: 4917-4926.
- [38] Photo Pre-Training, But for Sketch. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023), 2023: 2754-2764.
- [39] Pre-Training to Learn in Context. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol 1, 2023: 4849-4870.
- [40] mmT5: Modular Multilingual Pre-Training Solves Source Language Hallucinations. Findings of the Association for Computational Linguistics: EMNLP 2023, 2023: 1978-2008.