50 records in total
- [31] DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol. 1, 2023: 8208-8222.
- [32] Span Fine-tuning for Pre-trained Language Models. Findings of the Association for Computational Linguistics: EMNLP 2021, 2021: 1970-1979.
- [33] AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Vol. 1, 2021: 5146-5157.
- [35] Efficient Data Learning for Open Information Extraction with Pre-trained Language Models. Findings of the Association for Computational Linguistics: EMNLP 2023, 2023: 13056-13063.
- [36] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models. Computer Vision - ECCV 2024, Part XXXVI, 2025, 15094: 346-365.
- [37] A Parameter-Efficient Learning Approach to Arabic Dialect Identification with Pre-Trained General-Purpose Speech Model. Interspeech 2023, 2023: 1958-1962.
- [38] Efficient Fine-Tuning for Low-Resource Tibetan Pre-trained Language Models. Artificial Neural Networks and Machine Learning - ICANN 2024, Part VII, 2024, 15022: 410-422.
- [39] Federated Learning of Models Pre-Trained on Different Features with Consensus Graphs. Uncertainty in Artificial Intelligence, 2023, 216: 1336-1346.