50 items total
- [1] Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models [J]. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
- [2] How Does Fine-Tuning Impact Out-of-Distribution Detection for Vision-Language Models? [J]. International Journal of Computer Vision, 2024, 132: 596-609.
- [5] Weak Distribution Detectors Lead to Stronger Generalizability of Vision-Language Prompt Tuning [J]. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol 38, No 2, 2024: 1528-1536.
- [6] A Stable Vision Transformer for Out-of-Distribution Generalization [J]. Pattern Recognition and Computer Vision, PRCV 2023, Pt VIII, 2024, 14432: 328-339.
- [7] Effectiveness assessment of recent large vision-language models [J]. Visual Intelligence, 2(1).
- [8] On Evaluating Adversarial Robustness of Large Vision-Language Models [J]. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
- [9] MixPrompt: Enhancing Generalizability and Adversarial Robustness for Vision-Language Models via Prompt Fusion [J]. Advanced Intelligent Computing Technology and Applications, Pt IX, ICIC 2024, 2024, 14870: 328-339.
- [10] Distilling Vision-Language Foundation Models: A Data-Free Approach via Prompt Diversification [J]. Proceedings of the 31st ACM International Conference on Multimedia, MM 2023, 2023: 4928-4938.