Scaling Local Self-Attention for Parameter Efficient Visual Backbones

Cited by: 261
Authors
Vaswani, Ashish [1 ]
Ramachandran, Prajit [1 ]
Srinivas, Aravind [2 ]
Parmar, Niki [1 ]
Hechtman, Blake [1 ]
Shlens, Jonathon [1 ]
Affiliations
[1] Google Res, Mountain View, CA 94043 USA
[2] Univ Calif Berkeley, Berkeley, CA USA
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 | 2021
DOI
10.1109/CVPR46437.2021.01270
CLC number
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Self-attention has the promise of improving computer vision systems due to parameter-independent scaling of receptive fields and content-dependent interactions, in contrast to parameter-dependent scaling and content-independent interactions of convolutions. Self-attention models have recently been shown to have encouraging improvements on accuracy-parameter trade-offs compared to baseline convolutional models such as ResNet-50. In this work, we develop self-attention models that can outperform not just the canonical baseline models, but even the high-performing convolutional models. We propose two extensions to self-attention that, in conjunction with a more efficient implementation of self-attention, improve the speed, memory usage, and accuracy of these models. We leverage these improvements to develop a new self-attention model family, HaloNets, which reach state-of-the-art accuracies on the parameter-limited setting of the ImageNet classification benchmark. In preliminary transfer learning experiments, we find that HaloNet models outperform much larger models and have better inference performance. On harder tasks such as object detection and instance segmentation, our simple local self-attention and convolutional hybrids show improvements over very strong baselines. These results mark another step in demonstrating the efficacy of self-attention models on settings traditionally dominated by convolutions.
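To make the mechanism concrete, below is a minimal sketch of blocked local self-attention with a halo, the core idea behind HaloNets: queries are grouped into non-overlapping blocks, and each block attends to itself plus a small "halo" of neighboring positions. This is an illustrative single-head, 1-D NumPy toy; the function name, the default block and halo sizes, and the omission of learned query/key/value projections and multi-head structure are assumptions made for brevity, not the paper's optimized implementation.

import numpy as np

def halo_attention_1d(x, block_size=4, halo=2):
    # Single-head scaled dot-product attention where the sequence is split
    # into non-overlapping query blocks of `block_size`; each block attends
    # to itself plus `halo` extra positions on either side (zero-padded at
    # the borders). Learned Q/K/V projections are omitted for brevity.
    length, dim = x.shape
    assert length % block_size == 0, "length must be divisible by block_size"
    padded = np.pad(x, ((halo, halo), (0, 0)))  # zero halo at the borders

    out = np.empty_like(x)
    for b in range(length // block_size):
        lo, hi = b * block_size, (b + 1) * block_size
        q = x[lo:hi]                       # queries: (block_size, dim)
        kv = padded[lo:hi + 2 * halo]      # keys/values: (block_size + 2*halo, dim)
        logits = q @ kv.T / np.sqrt(dim)   # (block_size, block_size + 2*halo)
        w = np.exp(logits - logits.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True) # row-wise softmax
        out[lo:hi] = w @ kv                # attention-weighted values
    return out

# Toy usage: a 16-position, 8-dimensional feature map (1-D for clarity;
# the paper applies the same idea over 2-D blocks of an image feature map).
x = np.random.randn(16, 8).astype(np.float32)
print(halo_attention_1d(x).shape)  # (16, 8)

Note how growing `halo` enlarges each query's receptive field without adding any parameters, which is the property the abstract contrasts with convolutions.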
Pages: 12889 - 12899
Page count: 11
Related Papers
50 records in total
  • [1] Local self-attention in transformer for visual question answering
    Shen, Xiang
    Han, Dezhi
    Guo, Zihan
    Chen, Chongqing
    Hua, Jie
    Luo, Gaofeng
    APPLIED INTELLIGENCE, 2023, 53 (13) : 16706 - 16723
  • [2] Local Self-Attention over Long Text for Efficient Document Retrieval
    Hofstaetter, Sebastian
    Zamani, Hamed
    Mitra, Bhaskar
    Craswell, Nick
    Hanbury, Allan
    PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 2021 - 2024
  • [3] Fourier or Wavelet bases as counterpart self-attention in spikformer for efficient visual classification
    Wang, Qingyu
    Zhang, Duzhen
    Cai, Xinyuan
    Zhang, Tielin
    Xu, Bo
    FRONTIERS IN NEUROSCIENCE, 2025, 18
  • [4] Exploring Self-Attention for Visual Intersection Classification
    Nakata, Haruki
    Tanaka, Kanji
    Takeda, Koji
    JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2023, 27 (03) : 386 - 393
  • [5] Parameter efficient finetuning of text-to-image models with trainable self-attention layer
    Li, Zhuoyuan
    Sun, Yi
    IMAGE AND VISION COMPUTING, 2024, 151
  • [6] What Limits the Performance of Local Self-attention?
    Zhou, Jingkai
    Wang, Pichao
    Tang, Jiasheng
    Wang, Fan
    Liu, Qiong
    Li, Hao
    Jin, Rong
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2023, 131 (10) : 2516 - 2528
  • [7] Efficient brain tumor segmentation using Swin transformer and enhanced local self-attention
    Ghazouani, Fethi
    Vera, Pierre
    Ruan, Su
    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, 2024, 19 (2) : 273 - 281