50 items in total
- [41] Learning and Evaluating a Differentially Private Pre-trained Language Model. Findings of the Association for Computational Linguistics: EMNLP 2021, 2021: 1178-1189
- [42] TextPruner: A Model Pruning Toolkit for Pre-Trained Language Models. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022): System Demonstrations, 2022: 35-43
- [43] Multimodal Search on Iconclass using Vision-Language Pre-Trained Models. 2023 ACM/IEEE Joint Conference on Digital Libraries (JCDL), 2023: 285-287
- [45] Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 3995-4007
- [47] Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024: 23881-23890
- [48] A Simple Baseline for Open-Vocabulary Semantic Segmentation with Pre-trained Vision-Language Model. Computer Vision, ECCV 2022, Pt XXIX, 2022, 13689: 736-753
- [49] Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis. Intelligent Automation and Soft Computing, 2023, 37(02): 1673-1689