A Survey on Vision Transformer

Cited by: 1566
Authors
Han, Kai [1 ]
Wang, Yunhe [1 ]
Chen, Hanting [1 ,2 ]
Chen, Xinghao [1 ]
Guo, Jianyuan [1 ]
Liu, Zhenhua [1 ,2 ]
Tang, Yehui [1 ,2 ]
Xiao, An [1 ]
Xu, Chunjing [1 ]
Xu, Yixing [1 ]
Yang, Zhaohui [1 ,2 ]
Zhang, Yiman
Tao, Dacheng [3 ]
Affiliations
[1] Huawei Noah's Ark Lab, Beijing 100084, People's Republic of China
[2] Peking Univ, Sch EECS, Beijing 100871, Peoples R China
[3] Univ Sydney, Fac Engn, Sch Comp Sci, Darlington, NSW 2008, Australia
Keywords
Transformers; Task analysis; Encoding; Computer vision; Computational modeling; Visualization; Object detection; high-level vision; low-level vision; self-attention; transformer; video
DOI
10.1109/TPAMI.2022.3152247
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Transformer, first applied to the field of natural language processing, is a type of deep neural network based mainly on the self-attention mechanism. Thanks to its strong representation capabilities, researchers are looking at ways to apply transformer to computer vision tasks. On a variety of visual benchmarks, transformer-based models perform similarly to or better than other types of networks, such as convolutional and recurrent neural networks. Given its high performance and lesser need for vision-specific inductive bias, transformer is receiving more and more attention from the computer vision community. In this paper, we review these vision transformer models by categorizing them by task and analyzing their advantages and disadvantages. The main categories we explore include the backbone network, high/mid-level vision, low-level vision, and video processing. We also include efficient transformer methods for pushing transformer into real device-based applications. Furthermore, we take a brief look at the self-attention mechanism in computer vision, as it is the base component of transformer. Toward the end of this paper, we discuss the challenges and provide several further research directions for vision transformers.
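Since the abstract singles out self-attention as the base component of the transformer, a minimal NumPy sketch of scaled dot-product self-attention over a handful of patch tokens may help fix ideas. The function name, toy shapes, and random weights below are illustrative assumptions, not details taken from the survey.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention:
    Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # pairwise token similarities
    A = softmax(scores, axis=-1)    # attention weights; each row sums to 1
    return A @ V                    # mix token values by attention

# toy example: 4 "patch tokens", each an 8-dimensional embedding
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
Wq = rng.standard_normal((8, 8))
Wk = rng.standard_normal((8, 8))
Wv = rng.standard_normal((8, 8))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one updated embedding per token
```

In a vision transformer, `X` would come from flattening an image into fixed-size patches and linearly projecting each patch to an embedding; multi-head attention repeats this computation with several independent projection triples and concatenates the results.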
Pages: 87-110
Page count: 24
Related papers
50 records in total
  • [21] Building Extraction With Vision Transformer
    Wang, Libo
    Fang, Shenghui
    Meng, Xiaoliang
    Li, Rui
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [22] ViTT: Vision Transformer Tracker
    Zhu, Xiaoning
    Jia, Yannan
    Jian, Sun
    Gu, Lize
    Pu, Zhang
    SENSORS, 2021, 21 (16)
  • [23] Vision Transformer with Progressive Sampling
    Yue, Xiaoyu
    Sun, Shuyang
    Kuang, Zhanghui
    Wei, Meng
    Torr, Philip
    Zhang, Wayne
    Lin, Dahua
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 377 - 386
  • [24] Vision Transformer with Deformable Attention
    Xia, Zhuofan
    Pan, Xuran
    Song, Shiji
    Li, Li Erran
    Huang, Gao
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 4784 - 4793
  • [25] Vision Transformer With Quadrangle Attention
    Zhang, Qiming
    Zhang, Jing
    Xu, Yufei
    Tao, Dacheng
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (05) : 3608 - 3624
  • [26] CONTINUAL LEARNING IN VISION TRANSFORMER
    Takeda, Mana
    Yanai, Keiji
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 616 - 620
  • [27] Adder Attention for Vision Transformer
    Shu, Han
    Wang, Jiahao
    Chen, Hanting
    Li, Lin
    Yang, Yujiu
    Wang, Yunhe
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [28] On the Faithfulness of Vision Transformer Explanations
    Wu, Junyi
    Kang, Weitai
    Tang, Hao
    Hong, Yuan
    Yan, Yan
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 10936 - 10945
  • [29] ViViT: A Video Vision Transformer
    Arnab, Anurag
    Dehghani, Mostafa
    Heigold, Georg
    Sun, Chen
    Lucic, Mario
    Schmid, Cordelia
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 6816 - 6826
  • [30] Spiking Convolutional Vision Transformer
    Talafha, Sameerah
    Rekabdar, Banafsheh
    Mousas, Christos
    Ekenna, Chinwe
    2023 IEEE 17TH INTERNATIONAL CONFERENCE ON SEMANTIC COMPUTING, ICSC, 2023, : 225 - 226