50 records in total
- [1] BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation [C]. International Conference on Machine Learning (ICML), Vol. 162, 2022.
- [2] Self-training Improves Pre-training for Natural Language Understanding [C]. Proceedings of NAACL-HLT 2021, 2021: 5408-5418.
- [3] Unified Pre-training for Program Understanding and Generation [C]. Proceedings of NAACL-HLT 2021, 2021: 2655-2668.
- [4] Multimodal Pre-training Method for Vision-Language Understanding and Generation [J]. Journal of Software (Ruan Jian Xue Bao), 2023, 34(5): 2024-2034.
- [5] Cross-Lingual Natural Language Generation via Pre-Training [C]. Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020), 2020, 34: 7570-7577.
- [6] UNILMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training [C]. International Conference on Machine Learning (ICML), Vol. 119, 2020.
- [7] MPNet: Masked and Permuted Pre-training for Language Understanding [C]. Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020.
- [8] SPLAT: Speech-Language Joint Pre-Training for Spoken Language Understanding [C]. Proceedings of NAACL-HLT 2021, 2021: 1897-1907.
- [9] Task-adaptive Pre-training and Self-training are Complementary for Natural Language Understanding [C]. Findings of the Association for Computational Linguistics: EMNLP 2021, 2021: 1006-1015.
- [10] Speech Model Pre-training for End-to-End Spoken Language Understanding [C]. Interspeech 2019, 2019: 814-818.