Super Vision Transformer

Cited by: 8
Authors
Lin, Mingbao [1 ,2 ]
Chen, Mengzhao [1 ]
Zhang, Yuxin [1 ]
Shen, Chunhua [3 ]
Ji, Rongrong [1 ]
Cao, Liujuan [1 ]
Affiliations
[1] Minist Educ China, Sch Informat, Key Lab Multimedia Trusted Percept & Efficient Com, Xiamen, Peoples R China
[2] Tencent Youtu Lab, Shanghai, Peoples R China
[3] Zhejiang Univ, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Hardware efficiency; Supernet; Vision transformer;
DOI
10.1007/s11263-023-01861-3
CLC number
TP18 [Artificial Intelligence Theory];
Discipline code
081104; 0812; 0835; 1405;
Abstract
We attempt to reduce the computational costs of vision transformers (ViTs), which grow quadratically with the number of tokens. We present a novel training paradigm that trains only one ViT model at a time, yet is capable of providing improved image recognition performance at various computational costs. The trained ViT model, termed super vision transformer (SuperViT), is empowered with the versatile ability to process incoming patches of multiple sizes and to preserve informative tokens at multiple keeping rates (the ratio of tokens kept), achieving good hardware efficiency at inference, since available hardware resources often vary over time. Experimental results on ImageNet demonstrate that our SuperViT considerably reduces the computational costs of ViT models while even improving performance. For example, we halve the FLOPs of DeiT-S while increasing Top-1 accuracy by 0.2%, and gain 0.7% at a 1.5x reduction. Our SuperViT also significantly outperforms existing studies on efficient vision transformers: at the same FLOPs, it surpasses the recent state-of-the-art EViT by 1.1% when both use DeiT-S as the backbone. The project of this work is made publicly available at https://github.com/lmbxmu/SuperViT.
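To make the token-keeping idea in the abstract concrete, below is a minimal sketch (not the authors' released code) of pruning patch tokens by their attention from the [CLS] token and keeping only a given fraction of them. The function name, tensor shapes, and the [CLS]-attention scoring rule are illustrative assumptions, loosely following EViT-style token pruning.

```python
# Minimal sketch of token keeping: rank patch tokens by [CLS] attention
# and retain only a fraction of them (the keeping rate).
import torch

def keep_informative_tokens(tokens, cls_attn, keep_rate):
    """tokens: (B, 1 + N, D), with the [CLS] token first.
    cls_attn: (B, N) attention weights from [CLS] to the N patch tokens.
    keep_rate: fraction of patch tokens to keep, e.g. 0.7.
    """
    B, n_plus_1, D = tokens.shape
    n_keep = max(1, int((n_plus_1 - 1) * keep_rate))
    # Indices of the top-scoring patch tokens per image.
    idx = cls_attn.topk(n_keep, dim=1).indices            # (B, n_keep)
    idx = idx.unsqueeze(-1).expand(-1, -1, D)             # (B, n_keep, D)
    kept = torch.gather(tokens[:, 1:], dim=1, index=idx)  # gather patch tokens
    return torch.cat([tokens[:, :1], kept], dim=1)        # re-attach [CLS]

# Usage with assumed DeiT-S-like shapes: 14x14 = 196 patches, width 384.
tokens = torch.randn(2, 1 + 196, 384)
cls_attn = torch.rand(2, 196).softmax(-1)  # stand-in for real attention maps
out = keep_informative_tokens(tokens, cls_attn, keep_rate=0.5)
print(out.shape)  # torch.Size([2, 99, 384])
```

Because self-attention cost is quadratic in the token count, a keeping rate of 0.5 cuts per-layer attention FLOPs by roughly 4x, which is the lever a model trained for multiple keeping rates can pull at inference time.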
Pages: 3136-3151
Page count: 16
Related papers
50 records in total
  • [1] Super Vision Transformer
    Mingbao Lin
    Mengzhao Chen
    Yuxin Zhang
    Chunhua Shen
    Rongrong Ji
    Liujuan Cao
    International Journal of Computer Vision, 2023, 131 : 3136 - 3151
  • [2] Audio super-resolution via vision transformer
    Nistico, Simona
    Palopoli, Luigi
    Romano, Adele Pia
    JOURNAL OF INTELLIGENT INFORMATION SYSTEMS, 2024, 62 (04) : 1071 - 1085
  • [3] Audio Super-Resolution via Vision Transformer
    Nistico, Simona
    Palopoli, Luigi
    Romano, Adele Pia
    FOUNDATIONS OF INTELLIGENT SYSTEMS (ISMIS 2022), 2022, 13515 : 378 - 387
  • [4] SVTSR: image super-resolution using scattering vision transformer
    Liang, Jiabao
    Jin, Yutao
    Chen, Xiaoyan
    Huang, Haotian
    Deng, Yue
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [5] Ultrasound Super Resolution using Vision Transformer with Convolution Projection Operation
    Liu, Xilun
    Almekkawy, Mohamed
    2022 IEEE INTERNATIONAL ULTRASONICS SYMPOSIUM (IEEE IUS), 2022,
  • [6] A Robust Super-Resolution DoA Estimation Algorithm Using Vision Transformer
    Yu, Hao
    Wu, Sheng
    Lv, Liujie
    Su, Yi
    2024 13TH INTERNATIONAL CONFERENCE ON COMMUNICATIONS, CIRCUITS AND SYSTEMS, ICCCAS 2024, 2024, : 413 - 417
  • [7] Super Vision
    Morgan, Robert
    APPALACHIAN JOURNAL, 2017, 44 (1-2) : 147 - 147
  • [8] SUPER VISION
    Merali, Zeeya
    NATURE, 2015, 518 (7538) : 158 - 160
  • [9] Super vision
    Launer, John
    POSTGRADUATE MEDICAL JOURNAL, 2009, 85 (1004) : 335 - 336
  • [10] Super vision
    Kan, Henry
    MEDICAL JOURNAL OF AUSTRALIA, 2006, 185 (11-12) : 686 - 686