DropKey for Vision Transformer

Cited: 38
Authors
Li, Bonan [1 ]
Hu, Yinhan [1 ]
Nie, Xuecheng [2 ]
Han, Congying [1 ]
Jiang, Xiangjian [3 ]
Guo, Tiande [1 ]
Liu, Luocji [2 ]
机构
[1] Univ Chinese Acad Sci, Beijing, Peoples R China
[2] Meitu Inc, MT Lab, Beijing, Peoples R China
[3] Univ Cambridge, Cambridge, England
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/CVPR52729.2023.02174
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we focus on analyzing and improving the dropout technique for the self-attention layers of the Vision Transformer, which is important yet surprisingly ignored by prior works. In particular, we conduct research on three core questions. First, what to drop in self-attention layers? Different from dropping attention weights as in the literature, we propose to move the dropout operation forward, ahead of the attention matrix calculation, and set the Key as the dropout unit, yielding a novel dropout-before-softmax scheme. We theoretically verify that this scheme preserves both the regularization and the probability features of attention weights, alleviating overfitting to specific patterns and enhancing the model's ability to capture vital information globally. Second, how to schedule the drop ratio across consecutive layers? In contrast to exploiting a constant drop ratio for all layers, we present a new decreasing schedule that gradually lowers the drop ratio along the stack of self-attention layers. We experimentally validate that the proposed schedule avoids overfitting on low-level features without missing high-level semantics, thus improving the robustness and stability of model training. Third, is a structured dropout operation, as used in CNNs, needed? We attempt a patch-based block version of the dropout operation and find that this trick, useful for CNNs, is not essential for ViTs. Given the exploration of the above three questions, we present the novel DropKey method, which regards the Key as the drop unit and exploits a decreasing schedule for the drop ratio, improving ViTs in a general way. Comprehensive experiments demonstrate the effectiveness of DropKey for various ViT architectures, e.g., T2T, VOLO, CeiT and DeiT, as well as for various vision tasks, e.g., image classification, object detection, human-object interaction detection and human body shape recovery.
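The two main ideas of the abstract — dropping Keys by masking attention logits before the softmax, and decreasing the drop ratio along the layer stack — can be sketched in PyTorch. This is a minimal illustration based only on the abstract's description: the function names are made up, and the linear form of the decreasing schedule is an assumption (the paper only specifies that the schedule decreases), not the authors' exact implementation.

```python
import torch


def dropkey_attention(q, k, v, drop_ratio, training=True):
    """Self-attention with a DropKey-style dropout-before-softmax scheme.

    Instead of dropping attention weights *after* softmax, randomly mask
    Keys in the logits *before* softmax. The softmax then renormalizes
    over the surviving Keys, so the attention weights remain a valid
    probability distribution. Shapes: q, k, v are (B, heads, N, head_dim).
    """
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d ** 0.5  # (B, heads, N, N)
    if training and drop_ratio > 0:
        # Mask each Key independently by pushing its logit to -inf.
        mask = torch.rand_like(logits) < drop_ratio
        logits = logits.masked_fill(mask, float("-inf"))
    attn = logits.softmax(dim=-1)
    return attn @ v


def decreasing_schedule(base_ratio, num_layers):
    """Drop ratio per layer, decreasing along the stack.

    A linearly decreasing schedule is assumed here for illustration:
    the first (low-level) layer uses base_ratio, the last uses 0.
    """
    denom = max(num_layers - 1, 1)
    return [base_ratio * (1 - i / denom) for i in range(num_layers)]
```

At inference time (`training=False`) the function reduces to plain scaled dot-product attention, mirroring how standard dropout is disabled at test time.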
Pages: 22700 - 22709
Page count: 10
Related papers
(50 records)
  • [1] Fault Diagnosis of Gear Based on Multichannel Feature Fusion and DropKey-Vision Transformer
    Yang, Na
    Liu, Jie
    Zhao, Wei-Qiang
    Tan, Yutao
    IEEE SENSORS JOURNAL, 2024, 24 (04) : 4758 - 4770
  • [2] Gaze-Swin: Enhancing Gaze Estimation with a Hybrid CNN-Transformer Network and Dropkey Mechanism
    Zhao, Ruijie
    Wang, Yuhuan
    Luo, Sihui
    Shou, Suyao
    Tang, Pinyan
    ELECTRONICS, 2024, 13 (02)
  • [3] Vision Transformer for Pansharpening
    Meng, Xiangchao
    Wang, Nan
    Shao, Feng
    Li, Shutao
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [4] A Survey on Vision Transformer
    Han, Kai
    Wang, Yunhe
    Chen, Hanting
    Chen, Xinghao
    Guo, Jianyuan
    Liu, Zhenhua
    Tang, Yehui
    Xiao, An
    Xu, Chunjing
    Xu, Yixing
    Yang, Zhaohui
    Zhang, Yiman
    Tao, Dacheng
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (01) : 87 - 110
  • [5] Peripheral Vision Transformer
    Min, Juhong
    Zhao, Yucheng
    Luo, Chong
    Cho, Minsu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [6] Super Vision Transformer
    Lin, Mingbao
    Chen, Mengzhao
    Zhang, Yuxin
    Shen, Chunhua
    Ji, Rongrong
    Cao, Liujuan
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2023, 131 (12) : 3136 - 3151
  • [7] Dual Vision Transformer
    Yao, Ting
    Li, Yehao
    Pan, Yingwei
    Wang, Yu
    Zhang, Xiao-Ping
    Mei, Tao
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (09) : 10870 - 10882
  • [8] Vicinity Vision Transformer
    Sun W.
    Qin Z.
    Deng H.
    Wang J.
    Zhang Y.
    Zhang K.
    Barnes N.
    Birchfield S.
    Kong L.
    Zhong Y.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (10) : 12635 - 12649
  • [9] Sufficient Vision Transformer
    Cheng, Zhi
    Su, Xiu
    Wang, Xueyu
    You, Shan
    Xu, Chang
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 190 - 200