DropKey for Vision Transformer

Cited by: 38
Authors
Li, Bonan [1 ]
Hu, Yinhan [1 ]
Nie, Xuecheng [2 ]
Han, Congying [1 ]
Jiang, Xiangjian [3 ]
Guo, Tiande [1 ]
Liu, Luoqi [2]
Affiliations
[1] Univ Chinese Acad Sci, Beijing, Peoples R China
[2] Meitu Inc, MT Lab, Beijing, Peoples R China
[3] Univ Cambridge, Cambridge, England
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/CVPR52729.2023.02174
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we focus on analyzing and improving the dropout technique for the self-attention layers of the Vision Transformer, which is important yet surprisingly ignored by prior works. In particular, we conduct research on three core questions. First, what should be dropped in self-attention layers? Different from dropping attention weights as in the literature, we propose to move the dropout operation forward, ahead of the attention matrix calculation, and set the Key as the dropout unit, yielding a novel dropout-before-softmax scheme. We theoretically verify that this scheme helps keep both the regularization and probability features of attention weights, alleviating overfitting to specific patterns and enhancing the model's ability to capture vital information globally. Second, how should the drop ratio be scheduled across consecutive layers? In contrast to exploiting a constant drop ratio for all layers, we present a new decreasing schedule that gradually lowers the drop ratio along the stack of self-attention layers. We experimentally validate that the proposed schedule avoids overfitting to low-level features and missing high-level semantics, thus improving the robustness and stability of model training. Third, is a structured dropout operation, as in CNNs, necessary? We attempt a patch-based, block-version dropout operation and find that this trick, useful for CNNs, is not essential for ViTs. Given the exploration of the above three questions, we present the novel DropKey method, which regards the Key as the drop unit and exploits a decreasing schedule for the drop ratio, improving ViTs in a general way. Comprehensive experiments demonstrate the effectiveness of DropKey for various ViT architectures, e.g., T2T, VOLO, CeiT and DeiT, as well as for various vision tasks, e.g., image classification, object detection, human-object interaction detection and human body shape recovery.
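The two core ideas of the abstract — masking Keys before the softmax rather than dropping attention weights after it, and decreasing the drop ratio over the layer stack — can be sketched as follows. This is a minimal illustration of the dropout-before-softmax scheme, not the authors' released implementation; the function names and the linear form of the schedule are assumptions.

```python
import torch


def dropkey_attention(q, k, v, drop_ratio=0.1, training=True):
    """Scaled dot-product attention with DropKey-style masking.

    DropKey moves dropout *before* the softmax: randomly chosen Key
    positions in the attention logits are suppressed with a large
    negative bias, so the surviving weights still form a valid
    probability distribution over Keys (sketch, not the paper's code).
    """
    scale = q.shape[-1] ** -0.5
    attn = (q @ k.transpose(-2, -1)) * scale  # (..., queries, keys)
    if training and drop_ratio > 0:
        # Bernoulli mask over the logits; adding -1e12 drives the masked
        # entries to ~0 after softmax without breaking normalization.
        mask = torch.bernoulli(torch.full_like(attn, drop_ratio))
        attn = attn + mask * -1e12
    attn = attn.softmax(dim=-1)
    return attn @ v


def decreasing_drop_ratios(num_layers, base_ratio=0.3):
    # Hypothetical linear instance of the decreasing schedule: the drop
    # ratio decays from base_ratio at the first layer toward 0 at the last.
    return [base_ratio * (1 - i / max(num_layers - 1, 1))
            for i in range(num_layers)]
```

Because the mask is applied to the logits rather than the normalized weights, each attention row still sums to one, which is the "probability feature" the abstract says plain dropout on attention weights destroys.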
Pages: 22700 / 22709
Page count: 10
Related Papers
50 records in total
  • [21] Towards Robust Vision Transformer
    Mao, Xiaofeng
    Qi, Gege
    Chen, Yuefeng
    Li, Xiaodan
    Duan, Ranjie
    Ye, Shaokai
    He, Yuan
    Xue, Hui
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 12032 - 12041
  • [22] A lightweight vision transformer with symmetric modules for vision tasks
    Liang, Shengjun
    Yu, Mingxin
    Lu, Wenshuai
    Ji, Xinglong
    Tang, Xiongxin
    Liu, Xiaolin
    You, Rui
    INTELLIGENT DATA ANALYSIS, 2023, 27 (06) : 1741 - 1757
  • [23] FLatten Transformer: Vision Transformer using Focused Linear Attention
    Han, Dongchen
    Pan, Xuran
    Han, Yizeng
    Song, Shiji
    Huang, Gao
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 5938 - 5948
  • [24] Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
    Liu, Ze
    Lin, Yutong
    Cao, Yue
    Hu, Han
    Wei, Yixuan
    Zhang, Zheng
    Lin, Stephen
    Guo, Baining
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 9992 - 10002
  • [25] Orthogonal Transformer: An Efficient Vision Transformer Backbone with Token Orthogonalization
    Huang, Huaibo
    Zhou, Xiaoqiang
    He, Ran
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [26] Survey of Vision Transformer in Low-Level Computer Vision
    Zhu, Kai
    Li, Li
    Zhang, Tong
    Jiang, Sheng
    Bie, Yiming
    Computer Engineering and Applications, 2024, 60 (04) : 39 - 56
  • [27] ViTO: Vision Transformer-Operator
    Ovadia, Oded
    Kahana, Adar
    Stinis, Panos
    Turkel, Eli
    Givoli, Dan
    Karniadakis, George Em
    COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2024, 428
  • [28] Vision Transformer in Industrial Visual Inspection
    Hutten, Nils
    Meyes, Richard
    Meisen, Tobias
    APPLIED SCIENCES-BASEL, 2022, 12 (23):
  • [29] Survey of Transformer Research in Computer Vision
    Li, Xiang
    Zhang, Tao
    Zhang, Zhe
    Wei, Hongyang
    Qian, Yurong
    Computer Engineering and Applications, 2023, 59 (01) : 1 - 14
  • [30] ViTAS: Vision Transformer Architecture Search
    Su, Xiu
    You, Shan
    Xie, Jiyang
    Zheng, Mingkai
    Wang, Fei
    Qian, Chen
    Zhang, Changshui
    Wang, Xiaogang
    Xu, Chang
    COMPUTER VISION, ECCV 2022, PT XXI, 2022, 13681 : 139 - 157