50 records in total
- [3] AMixer: Adaptive Weight Mixing for Self-attention Free Vision Transformers. Computer Vision, ECCV 2022, Pt XXI, 2022, 13681: 50-67
- [5] An Attention-Based Token Pruning Method for Vision Transformers. Rough Sets, IJCRS 2022, 2022, 13633: 274-288
- [6] Adaptive Attention Span in Transformers. 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), 2019: 331-335
- [7] Robustifying Token Attention for Vision Transformers. 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023: 17511-17522
- [8] Efficient Vision Transformers with Partial Attention. Computer Vision - ECCV 2024, Pt LXXXIII, 2025, 15141: 298-317
- [9] Fast Vision Transformers with HiLo Attention. Advances in Neural Information Processing Systems 35, NeurIPS 2022, 2022
- [10] DaViT: Dual Attention Vision Transformers. Computer Vision, ECCV 2022, Pt XXIV, 2022, 13684: 74-92