Total: 50 entries
- [41] Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 22680-22689
- [42] AMixer: Adaptive Weight Mixing for Self-attention Free Vision Transformers. Computer Vision, ECCV 2022, Pt XXI, 2022, 13681: 50-67
- [43] TAQ: Top-K Attention-Aware Quantization for Vision Transformers. 2023 IEEE International Conference on Image Processing (ICIP), 2023: 1750-1754
- [44] Vision Transformers with Cross-Attention Pyramids for Class-Agnostic Counting. 2024 9th International Conference on Signal and Image Processing (ICSIP), 2024: 689-695
- [46] ADA-VIT: Attention-Guided Data Augmentation for Vision Transformers. 2023 IEEE International Conference on Image Processing (ICIP), 2023: 385-389
- [48] Brain encoding models based on multimodal transformers can transfer across language and vision. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023
- [49] Explainable hybrid vision transformers and convolutional network for multimodal glioma segmentation in brain MRI. Scientific Reports, 14