50 records in total
- [1] Adaptive Token Sampling for Efficient Vision Transformers. Computer Vision, ECCV 2022, Part XI, Vol. 13671, 2022: 396-414.
- [2] Dynamic Token Pruning in Plain Vision Transformers for Semantic Segmentation. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023: 777-786.
- [3] An Attention-Based Token Pruning Method for Vision Transformers. Rough Sets, IJCRS 2022, Vol. 13633, 2022: 274-288.
- [4] Learned Token Pruning for Transformers. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), 2022: 784-794.
- [5] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.
- [6] Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 2092-2101.
- [7] Making Vision Transformers Efficient from A Token Sparsification View. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 6195-6205.
- [8] ALGM: Adaptive Local-then-Global Token Merging for Efficient Semantic Segmentation with Plain Vision Transformers. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024: 15773-15782.
- [9] Make a Long Image Short: Adaptive Token Length for Vision Transformers. Machine Learning and Knowledge Discovery in Databases: Research Track, ECML PKDD 2023, Part II, Vol. 14170, 2023: 69-85.