共 50 条
- [31] Align, Reason and Learn: Enhancing Medical Vision-and-Language Pre-training with Knowledge PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5152 - 5161
- [33] Enhancing Vision-Language Pre-Training with Jointly Learned Questioner and Dense Captioner PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 5120 - 5131
- [34] Enhancing Vision-Language Pre-Training with Jointly Learned Questioner and Dense Captioner MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia, 2023, : 5120 - 5131
- [35] ST-BERT: CROSS-MODAL LANGUAGE MODEL PRE-TRAINING FOR END-TO-END SPOKEN LANGUAGE UNDERSTANDING 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7478 - 7482
- [36] QUERT: Continual Pre-training of Language Model for Query Understanding in Travel Domain Search PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 5282 - 5291
- [38] An Empirical Investigation Towards Efficient Multi-Domain Language Model Pre-training PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 4854 - 4864
- [40] Bridging the Gap between Recognition-level Pre-training and Commonsensical Vision-language Tasks PROCEEDINGS OF THE FIRST WORKSHOP ON COMMONSENSE REPRESENTATION AND REASONING (CSRR 2022), 2022, : 23 - 35