DropKey for Vision Transformer

Cited by: 38
Authors
Li, Bonan [1 ]
Hu, Yinhan [1 ]
Nie, Xuecheng [2 ]
Han, Congying [1 ]
Jiang, Xiangjian [3 ]
Guo, Tiande [1 ]
Liu, Luocji [2 ]
Affiliations
[1] Univ Chinese Acad Sci, Beijing, Peoples R China
[2] Meitu Inc, MT Lab, Beijing, Peoples R China
[3] Univ Cambridge, Cambridge, England
Funding
National Natural Science Foundation of China;
DOI
10.1109/CVPR52729.2023.02174
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we focus on analyzing and improving the dropout technique for the self-attention layers of the Vision Transformer, which is important yet surprisingly ignored by prior works. In particular, we investigate three core questions. First, what should be dropped in self-attention layers? Different from dropping attention weights as in the literature, we propose to move the dropout operation forward, ahead of the attention matrix calculation, and to set the Key as the dropout unit, yielding a novel dropout-before-softmax scheme. We theoretically verify that this scheme keeps both the regularization and the probability features of attention weights, alleviating overfitting to specific patterns and enhancing the model's ability to capture vital information globally. Second, how should the drop ratio be scheduled across consecutive layers? In contrast to exploiting a constant drop ratio for all layers, we present a new schedule that gradually decreases the drop ratio along the stack of self-attention layers. We experimentally validate that the proposed schedule avoids overfitting to low-level features and the loss of high-level semantics, thus improving the robustness and stability of model training. Third, is a structured dropout operation, as in CNNs, needed? We attempt a patch-based, block-wise version of the dropout operation and find that this trick, useful for CNNs, is not essential for ViTs. Based on the exploration of these three questions, we present the novel DropKey method, which regards the Key as the drop unit and exploits a decreasing schedule for the drop ratio, improving ViTs in a general way. Comprehensive experiments demonstrate the effectiveness of DropKey for various ViT architectures, e.g., T2T, VOLO, CeiT and DeiT, as well as for various vision tasks, e.g., image classification, object detection, human-object interaction detection and human body shape recovery.
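As a concrete illustration of the dropout-before-softmax scheme and the decreasing drop-ratio schedule described in the abstract, the sketch below applies a Bernoulli mask to the attention logits (i.e., drops Keys) before softmax. The function names, the -1e12 masking constant, and the linear layer-wise schedule are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def dropkey_attention(q, k, v, mask_ratio=0.1, training=True):
    # Scaled dot-product attention with dropout applied to the logits
    # BEFORE softmax, so each query renormalizes over the surviving
    # Keys (a sketch of the dropout-before-softmax scheme; the -1e12
    # masking constant is an illustrative choice).
    scale = q.size(-1) ** -0.5
    attn = (q @ k.transpose(-2, -1)) * scale  # (..., L_q, L_k) logits
    if training and mask_ratio > 0:
        # Sample a 0/1 mask: each Key is dropped with prob. mask_ratio.
        drop = torch.bernoulli(torch.full_like(attn, mask_ratio))
        attn = attn + drop * -1e12  # masked Keys get ~zero weight
    attn = attn.softmax(dim=-1)
    return attn @ v

def decreasing_ratios(base_ratio, num_layers):
    # Hypothetical linear schedule: the drop ratio decreases from
    # base_ratio at the first layer toward zero at the last, matching
    # the decreasing schedule described in the abstract.
    return [base_ratio * (1 - i / max(num_layers - 1, 1))
            for i in range(num_layers)]
```

For example, with `base_ratio=0.3` and a 12-layer ViT, `decreasing_ratios` yields per-layer ratios of roughly 0.3, 0.27, ..., 0.0, so the early layers, which learn low-level features most prone to overfitting, are regularized most strongly.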
Pages: 22700 - 22709
Page count: 10
Related Papers
50 records in total
  • [41] Rotary Position Embedding for Vision Transformer
    Heo, Byeongho
    Park, Song
    Han, Dongyoon
    Yun, Sangdoo
    COMPUTER VISION - ECCV 2024, PT X, 2025, 15068 : 289 - 305
  • [42] Vision Transformer for femur fracture classification
    Tanzi, Leonardo
    Audisio, Andrea
    Cirrincione, Giansalvo
    Aprato, Alessandro
    Vezzetti, Enrico
    INJURY-INTERNATIONAL JOURNAL OF THE CARE OF THE INJURED, 2022, 53 (07): 2625 - 2634
  • [43] Continual Learning with Lifelong Vision Transformer
    Wang, Zhen
    Liu, Liu
    Duan, Yiqun
    Kong, Yajing
    Tao, Dacheng
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 171 - 181
  • [44] CoAtFormer: Vision Transformer with Composite Attention
    Chang, Zhiyong
    Yin, Mingjun
    Wang, Yan
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 614 - 622
  • [45] Representation Learning Based on Vision Transformer
    Ran, Ruisheng
    Gao, Tianyu
    Hu, Qianwei
    Zhang, Wenfeng
    Peng, Shunshun
    Fang, Bin
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2024, 38 (07)
  • [46] Depth Inpainting via Vision Transformer
    Makarov, Ilya
    Borisenko, Gleb
    2021 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY ADJUNCT PROCEEDINGS (ISMAR-ADJUNCT 2021), 2021, : 286 - 291
  • [47] Visformer: The Vision-friendly Transformer
    Chen, Zhengsu
    Xie, Lingxi
    Niu, Jianwei
    Liu, Xuefeng
    Wei, Longhui
    Tian, Qi
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 569 - 578
  • [48] Searching the Search Space of Vision Transformer
    Chen, Minghao
    Wu, Kan
    Ni, Bolin
    Peng, Houwen
    Liu, Bei
    Fu, Jianlong
    Chao, Hongyang
    Ling, Haibin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [49] MAG-Vision: A Vision Transformer Backbone for Magnetic Material Modeling
    Zhang, Rui
    Shen, Lei
    IEEE TRANSACTIONS ON MAGNETICS, 2025, 61 (03)
  • [50] PolySegNet: improving polyp segmentation through swin transformer and vision transformer fusion
    Lijin, P.
    Ullah, Mohib
    Vats, Anuja
    Cheikh, Faouzi Alaya
    Kumar, G. Santhosh
    Nair, Madhu S.
    BIOMEDICAL ENGINEERING LETTERS, 2024, 14 (06) : 1421 - 1431