50 records in total
- [41] Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 22680-22689
- [42] TAQ: Top-k Attention-Aware Quantization for Vision Transformers. 2023 IEEE International Conference on Image Processing (ICIP), 2023: 1750-1754
- [43] Vision Transformers with Cross-Attention Pyramids for Class-Agnostic Counting. 2024 9th International Conference on Signal and Image Processing (ICSIP), 2024: 689-695
- [44] ADA-VIT: Attention-Guided Data Augmentation for Vision Transformers. 2023 IEEE International Conference on Image Processing (ICIP), 2023: 385-389
- [46] Make a Long Image Short: Adaptive Token Length for Vision Transformers. Machine Learning and Knowledge Discovery in Databases: Research Track (ECML PKDD 2023), Part II, 2023, 14170: 69-85
- [47] Domain-Adaptive Vision Transformers for Generalizing Across Visual Domains. IEEE Access, 2023, 11: 115644-115653
- [49] HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers. 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2023: 442-455