DropKey for Vision Transformer

Cited by: 38
Authors
Li, Bonan [1 ]
Hu, Yinhan [1 ]
Nie, Xuecheng [2 ]
Han, Congying [1 ]
Jiang, Xiangjian [3 ]
Guo, Tiande [1 ]
Liu, Luocji [2 ]
Affiliations
[1] Univ Chinese Acad Sci, Beijing, Peoples R China
[2] Meitu Inc, MT Lab, Beijing, Peoples R China
[3] Univ Cambridge, Cambridge, England
Funding
National Natural Science Foundation of China;
DOI
10.1109/CVPR52729.2023.02174
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we focus on analyzing and improving the dropout technique for the self-attention layers of the Vision Transformer (ViT), which is important yet surprisingly ignored by prior works. In particular, we conduct research on three core questions. First, what should be dropped in self-attention layers? Different from dropping attention weights as in the literature, we propose to move the dropout operation forward, ahead of the attention matrix calculation, and set the Key as the dropout unit, yielding a novel dropout-before-softmax scheme. We theoretically verify that this scheme helps keep both the regularization and the probability features of attention weights, alleviating overfitting to specific patterns and enhancing the model's ability to capture vital information globally. Second, how should the drop ratio be scheduled across consecutive layers? In contrast to exploiting a constant drop ratio for all layers, we present a new decreasing schedule that gradually lowers the drop ratio along the stack of self-attention layers. We experimentally validate that the proposed schedule avoids overfitting to low-level features and the loss of high-level semantics, thus improving the robustness and stability of model training. Third, is a structured dropout operation, as in CNNs, needed? We experiment with a patch-based block version of the dropout operation and find that this trick, while useful for CNNs, is not essential for ViTs. Based on the exploration of the above three questions, we present the novel DropKey method, which regards the Key as the drop unit and exploits a decreasing schedule for the drop ratio, improving ViTs in a general way. Comprehensive experiments demonstrate the effectiveness of DropKey for various ViT architectures, e.g., T2T, VOLO, CeiT and DeiT, as well as for various vision tasks, e.g., image classification, object detection, human-object interaction detection and human body shape recovery.
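Since the abstract describes the method at the level of an algorithm, a minimal PyTorch sketch may help. This is not the authors' released code: it illustrates (i) the dropout-before-softmax idea, where a Bernoulli mask pushes the logits of dropped Keys toward negative infinity before softmax so each attention row still sums to 1, and (ii) one plausible linear form of the decreasing drop-ratio schedule. The function names, tensor shapes, and the base ratio of 0.3 are illustrative assumptions.

```python
import torch


def dropkey_attention(q, k, v, drop_ratio=0.1, training=True):
    # q, k, v: (batch, heads, tokens, head_dim)
    scale = q.shape[-1] ** -0.5
    logits = (q @ k.transpose(-2, -1)) * scale        # (B, H, N, N)
    if training and drop_ratio > 0:
        # Dropout-before-softmax: sample a Bernoulli mask over the logits
        # and push masked entries toward -inf, so the dropped Keys receive
        # near-zero weight while each attention row still sums to 1.
        mask = torch.bernoulli(torch.full_like(logits, drop_ratio))
        logits = logits + mask * -1e9
    attn = logits.softmax(dim=-1)
    return attn @ v


def decreasing_schedule(base_ratio, num_layers):
    # Linearly decay the drop ratio from base_ratio (first layer) to 0
    # (last layer); one plausible reading of the decreasing schedule.
    return [base_ratio * (1 - i / max(num_layers - 1, 1))
            for i in range(num_layers)]


# Example: per-layer ratios for a 12-layer ViT, applied to one attention call.
ratios = decreasing_schedule(0.3, num_layers=12)
q = k = v = torch.randn(2, 8, 16, 64)
out = dropkey_attention(q, k, v, drop_ratio=ratios[0])  # (2, 8, 16, 64)
```

Because the mask is applied before the softmax rather than to the attention weights, the output remains a proper probability-weighted average of the surviving Keys, which is the property the abstract emphasizes.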
Pages: 22700-22709
Number of pages: 10
Related papers (50 in total)
  • [31] Video Summarization With Spatiotemporal Vision Transformer
    Hsu, Tzu-Chun
    Liao, Yi-Sheng
    Huang, Chun-Rong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 3013 - 3026
  • [32] Ensemble Vision Transformer for Dementia Diagnosis
    Huang, Fei
    Qiu, Anqi
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (09) : 5551 - 5561
  • [33] Self-slimmed Vision Transformer
    Zong, Zhuofan
    Li, Kunchang
    Song, Guanglu
    Wang, Yali
    Qiao, Yu
    Leng, Biao
    Liu, Yu
    COMPUTER VISION, ECCV 2022, PT XI, 2022, 13671 : 432 - 448
  • [34] The Application of Vision Transformer in Image Classification
    He, Zhixuan
    2022 THE 6TH INTERNATIONAL CONFERENCE ON VIRTUAL AND AUGMENTED REALITY SIMULATIONS, ICVARS 2022, 2022, : 56 - 63
  • [35] What Makes for Hierarchical Vision Transformer?
    Fang, Yuxin
    Wang, Xinggang
    Wu, Rui
    Liu, Wenyu
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (10) : 12714 - 12720
  • [36] SPVT: Spiked Pyramid Vision Transformer
    Guo, Yazhuo
    Qin, Yuhan
    Chen, Song
    Kang, Yi
    2024 IEEE 6TH INTERNATIONAL CONFERENCE ON AI CIRCUITS AND SYSTEMS, AICAS 2024, 2024, : 110 - 113
  • [37] IMAGE STEGANALYSIS WITH CONVOLUTIONAL VISION TRANSFORMER
    Luo, Ge
    Wei, Ping
    Zhu, Shuwen
    Zhang, Xinpeng
    Qian, Zhenxing
    Li, Sheng
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3089 - 3093
  • [38] Lightweight Vision Transformer with Bidirectional Interaction
    Fan, Qihang
    Huang, Huaibo
    Zhou, Xiaoqiang
    He, Ran
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [39] Glance-and-Gaze Vision Transformer
    Yu, Qihang
    Xia, Yingda
    Bai, Yutong
    Lu, Yongyi
    Yuille, Alan
    Shen, Wei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [40] E(2)-Equivariant Vision Transformer
    Xu, Renjun
    Yang, Kaifan
    Liu, Ke
    He, Fengxiang
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 2356 - 2366