50 entries in total
- [11] Rethinking the Self-Attention in Vision Transformers. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2021: 3065-3069
- [13] KVT: k-NN Attention for Boosting Vision Transformers. Computer Vision, ECCV 2022, Pt XXIV, 2022, 13684: 285-302
- [15] Self-attention in vision transformers performs perceptual grouping, not attention. Frontiers in Computer Science, 2023, 5
- [16] Visual Transformers: Where Do Transformers Really Belong in Vision Models? 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021: 579-589
- [17] Less is More: Pay Less Attention in Vision Transformers. Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22), 2022: 2035-2043
- [20] Twins: Revisiting the Design of Spatial Attention in Vision Transformers. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021