On the Integration of Self-Attention and Convolution

Cited by: 169
Authors
Pan, Xuran [1 ]
Ge, Chunjiang [1 ]
Lu, Rui [1 ]
Song, Shiji [1 ]
Chen, Guanfu [2 ]
Huang, Zeyi [2 ]
Huang, Gao [1 ,3 ]
Affiliations
[1] Tsinghua Univ, Dept Automat, BNRist, Beijing, Peoples R China
[2] Huawei Technol Ltd, Shenzhen, Peoples R China
[3] Beijing Acad Artificial Intelligence, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPR52688.2022.00089
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Convolution and self-attention are two powerful techniques for representation learning, and they are usually considered peer approaches that are distinct from each other. In this paper, we show that there exists a strong underlying relation between them, in the sense that the bulk of the computation in these two paradigms is in fact done with the same operation. Specifically, we first show that a traditional convolution with kernel size k x k can be decomposed into k^2 individual 1 x 1 convolutions, followed by shift and summation operations. Then, we interpret the projections of queries, keys, and values in the self-attention module as multiple 1 x 1 convolutions, followed by the computation of attention weights and the aggregation of values. Therefore, the first stage of both modules comprises the same operation. More importantly, the first stage accounts for the dominant computational complexity (quadratic in the channel size) compared with the second stage. This observation naturally leads to an elegant integration of these two seemingly distinct paradigms, i.e., a mixed model that enjoys the benefits of both self-Attention and Convolution (ACmix), while incurring minimal computational overhead compared to its pure convolution or self-attention counterparts. Extensive experiments show that our model achieves consistently improved results over competitive baselines on image recognition and downstream tasks. Code and pre-trained models will be released at https://github.com/LeapLabTHU/ACmix and https://gitee.com/mindspore/models.
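The decomposition claimed in the abstract can be checked numerically. The following is a minimal PyTorch sketch (an illustration only, not the authors' released ACmix code; the tensor sizes and variable names are arbitrary assumptions) verifying that a k x k convolution is equivalent to k^2 individual 1 x 1 convolutions whose outputs are shifted and summed:

    # Minimal sketch: a k x k convolution decomposed into k^2 separate
    # 1 x 1 convolutions followed by shift-and-sum (illustrative shapes).
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    k, pad = 3, 1                      # kernel size and matching zero padding
    x = torch.randn(1, 4, 16, 16)      # (batch, in_channels, H, W)
    w = torch.randn(8, 4, k, k)        # (out_channels, in_channels, k, k)

    # Reference: one ordinary k x k convolution with zero padding.
    ref = F.conv2d(x, w, padding=pad)

    # Decomposition: pad once, then for each kernel position (i, j) apply a
    # 1 x 1 convolution to the correspondingly shifted window and accumulate.
    xp = F.pad(x, (pad, pad, pad, pad))
    H, W = x.shape[-2:]
    out = torch.zeros_like(ref)
    for i in range(k):
        for j in range(k):
            w_ij = w[:, :, i, j].unsqueeze(-1).unsqueeze(-1)   # 1 x 1 kernel
            out += F.conv2d(xp[:, :, i:i + H, j:j + W], w_ij)  # shift + 1x1 conv

    print(torch.allclose(ref, out, atol=1e-5))  # True: the two paths agree

The same 1 x 1 projections reappear in self-attention as the query, key, and value mappings, which is the shared first stage that ACmix exploits.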
Pages: 805-815
Number of pages: 11
Related Papers
50 items in total
  • [11] 3D-ShuffleViT: An Efficient Video Action Recognition Network with Deep Integration of Self-Attention and Convolution
    Wang, Yinghui
    Zhu, Anlei
    Ma, Haomiao
    Ai, Lingyu
    Song, Wei
    Zhang, Shaojie
    MATHEMATICS, 2023, 11 (18)
  • [12] Condensed Convolution Neural Network by Attention over Self-attention for Stance Detection in Twitter
    Zhou, Shengping
    Lin, Junjie
    Tan, Lianzhi
    Liu, Xin
2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019
  • [13] Self-Attention and MLP Auxiliary Convolution for Face Anti-Spoofing
    Gu, Hanqing
    Chen, Jiayin
    Xiao, Fusu
    Zhang, Yi-Jia
    Lu, Zhe-Ming
    IEEE ACCESS, 2023, 11 : 131152 - 131167
  • [14] The Contrastive Network With Convolution and Self-Attention Mechanisms for Unsupervised Cell Segmentation
    Zhao, Yuhang
    Shao, Xianhao
    Chen, Cai
    Song, Junlin
    Tian, Chongxuan
    Li, Wei
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (12) : 5837 - 5847
  • [15] Self-Attention Graph Convolution Residual Network for Traffic Data Completion
    Zhang, Yong
    Wei, Xiulan
    Zhang, Xinyu
    Hu, Yongli
    Yin, Baocai
    IEEE TRANSACTIONS ON BIG DATA, 2023, 9 (02) : 528 - 541
  • [16] Fusing Convolution and Self-Attention Parallel in Frequency Domain for Image Deblurring
    Huang, Xuandong
    He, JingSong
    NEURAL PROCESSING LETTERS, 2023, 55 (07) : 9811 - 9829
  • [18] Self-Attention and Dynamic Convolution Hybrid Model for Neural Machine Translation
    Zhang, Zhebin
    Wu, Sai
    Chen, Gang
    Jiang, Dawei
    11TH IEEE INTERNATIONAL CONFERENCE ON KNOWLEDGE GRAPH (ICKG 2020), 2020, : 352 - 359
  • [19] Dunhuang murals contour generation network based on convolution and self-attention fusion
    Liu, Baokai
    He, Fengjie
    Du, Shiqiang
    Zhang, Kaiwu
    Wang, Jianhua
    APPLIED INTELLIGENCE, 2023, 53 (19) : 22073 - 22085
  • [20] Cloudformer: A Cloud-Removal Network Combining Self-Attention Mechanism and Convolution
    Wu, Peiyang
    Pan, Zongxu
    Tang, Hairong
    Hu, Yuxin
    REMOTE SENSING, 2022, 14 (23)