50 items in total
- [32] Parameter-efficient fine-tuning of large-scale pre-trained language models [J]. Nature Machine Intelligence, 2023, 5: 220-235
- [33] Hadamard Adapter: An Extreme Parameter-Efficient Adapter Tuning Method for Pre-trained Language Models [J]. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM 2023), 2023: 276-285
- [36] Neural Transfer Learning for Vietnamese Sentiment Analysis Using Pre-trained Contextual Language Models [J]. 2021 IEEE International Conference on Machine Learning and Applied Network Technologies (ICMLANT II), 2021: 84-88
- [37] Learning to Prompt for Vision-Language Models [J]. International Journal of Computer Vision, 2022, 130(9): 2337-2348
- [38] Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization [J]. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 3995-4007
- [40] Annotating Columns with Pre-trained Language Models [J]. Proceedings of the 2022 International Conference on Management of Data (SIGMOD '22), 2022: 1493-1503