On the Integration of Self-Attention and Convolution

Cited by: 169
Authors
Pan, Xuran [1 ]
Ge, Chunjiang [1 ]
Lu, Rui [1 ]
Song, Shiji [1 ]
Chen, Guanfu [2 ]
Huang, Zeyi [2 ]
Huang, Gao [1 ,3 ]
Affiliations
[1] Tsinghua Univ, Dept Automat, BNRist, Beijing, Peoples R China
[2] Huawei Technol Ltd, Shenzhen, Peoples R China
[3] Beijing Acad Artificial Intelligence, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/CVPR52688.2022.00089
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convolution and self-attention are two powerful techniques for representation learning, and they are usually considered as two peer approaches that are distinct from each other. In this paper, we show that there exists a strong underlying relation between them, in the sense that the bulk of the computation in these two paradigms is in fact done with the same operation. Specifically, we first show that a traditional convolution with kernel size k x k can be decomposed into k^2 individual 1 x 1 convolutions, followed by shift and summation operations. Then, we interpret the projections of queries, keys, and values in the self-attention module as multiple 1 x 1 convolutions, followed by the computation of attention weights and the aggregation of values. Therefore, the first stage of both modules comprises the same operation. More importantly, the first stage accounts for the dominant computational complexity (quadratic in the channel size) compared with the second stage. This observation naturally leads to an elegant integration of these two seemingly distinct paradigms, i.e., a mixed model that enjoys the benefits of both self-Attention and Convolution (ACmix), while incurring minimal computational overhead compared to its pure convolution or self-attention counterparts. Extensive experiments show that our model achieves consistently improved results over competitive baselines on image recognition and downstream tasks. Code and pre-trained models will be released at https://github.com/LeapLabTHU/ACmix and https://gitee.com/mindspore/models.
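The decomposition claimed in the abstract can be made concrete with a short sketch. The following PyTorch snippet is a minimal illustration under assumed toy sizes, not the authors' released implementation (see the linked repositories for that): it verifies that a k x k convolution equals k^2 1 x 1 convolutions followed by shift-and-sum, and that a query projection in self-attention is itself a 1 x 1 convolution. All variable names and dimensions here are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    k, c_in, c_out = 3, 4, 8            # toy sizes (assumptions for illustration)
    x = torch.randn(1, c_in, 16, 16)    # input feature map
    w = torch.randn(c_out, c_in, k, k)  # a standard k x k convolution kernel

    # Reference: an ordinary k x k convolution with 'same' padding.
    ref = F.conv2d(x, w, padding=k // 2)

    # Stage 1: k^2 independent 1 x 1 convolutions, one per kernel position.
    # This channel projection is the part that dominates the FLOPs.
    outs = [F.conv2d(x, w[:, :, i, j, None, None])
            for i in range(k) for j in range(k)]

    # Stage 2: shift each 1 x 1 output by its kernel offset, then sum.
    acc = torch.zeros_like(ref)
    for idx, (i, j) in enumerate((i, j) for i in range(k) for j in range(k)):
        dy, dx = k // 2 - i, k // 2 - j
        s = torch.roll(outs[idx], shifts=(dy, dx), dims=(2, 3))
        # Zero the wrapped-around borders so they match zero padding.
        if dy > 0: s[:, :, :dy, :] = 0
        if dy < 0: s[:, :, dy:, :] = 0
        if dx > 0: s[:, :, :, :dx] = 0
        if dx < 0: s[:, :, :, dx:] = 0
        acc += s

    assert torch.allclose(ref, acc, atol=1e-4)  # both paths agree

    # The self-attention side: the query (and likewise key/value)
    # projection is exactly the same stage-1 operation, a 1 x 1 conv.
    w_q = torch.randn(c_out, c_in, 1, 1)
    q_conv = F.conv2d(x, w_q)
    q_proj = torch.einsum('oc,nchw->nohw', w_q[:, :, 0, 0], x)
    assert torch.allclose(q_conv, q_proj, atol=1e-4)

Per the abstract, ACmix exploits this: the shared projection stage can be computed once and reused by both the convolutional and attentional aggregation paths, so the mixed model adds only the cheap second-stage operations.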
Pages: 805-815
Number of pages: 11
Related Papers
50 records in total
  • [31] Research on Combining Self-Attention and Convolution for Chest X-Ray Disease Classification
    Guan, Xin
    Geng, Jingjing
    Li, Qiang
    LASER & OPTOELECTRONICS PROGRESS, 2024, 61 (04)
  • [32] Multi-scale convolution networks for seismic event classification with windowed self-attention
    Huang, Yongming
    Xie, Yi
    Liu, Wei
    Ma, Yongsheng
    Miao, Fajun
    Zhang, Guobao
    JOURNAL OF SEISMOLOGY, 2025, 29 (1) : 257 - 268
  • [33] Attention and self-attention in random forests
    Utkin, Lev V.
    Konstantinov, Andrei V.
    Kirpichenko, Stanislav R.
    PROGRESS IN ARTIFICIAL INTELLIGENCE, 2023, 12 : 257 - 273
  • [34] VITRANSPAD: VIDEO TRANSFORMER USING CONVOLUTION AND SELF-ATTENTION FOR FACE PRESENTATION ATTACK DETECTION
    Ming, Zuheng
    Yu, Zitong
    Al-Ghadi, Musab
    Visani, Muriel
    Luqman, Muhammad Muzzamil
    Burie, Jean-Christophe
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 4248 - 4252
  • [35] Self-Attention Graph Convolution Imputation Network for Spatio-Temporal Traffic Data
    Wei, Xiulan
    Zhang, Yong
    Wang, Shaofan
    Zhao, Xia
    Hu, Yongli
    Yin, Baocai
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (12) : 19549 - 19562
  • [36] Mixing Self-Attention and Convolution: A Unified Framework for Multisource Remote Sensing Data Classification
    Li, Ke
    Wang, Di
    Wang, Xu
    Liu, Gang
    Wu, Zili
    Wang, Quan
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [37] DEEPCHORUS: A HYBRID MODEL OF MULTI-SCALE CONVOLUTION AND SELF-ATTENTION FOR CHORUS DETECTION
    He, Qiqi
    Sun, Xiaoheng
    Yu, Yi
    Li, Wei
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 411 - 415
  • [38] Bidirectional Temporal Convolution with Self-Attention Network for CTC-Based Acoustic Modeling
    Sun, Jian
    Guo, Wu
    Gu, Bin
    Liu, Yao
    2019 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2019, : 1262 - 1266
  • [39] Ghost Module Based Residual Mixture of Self-Attention and Convolution for Online Signature Verification
    Luan, Fangjun
    Mu, Xuewen
    Yuan, Shuai
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 79 (01): : 695 - 712
  • [40] Self-attention Based Multimodule Fusion Graph Convolution Network for Traffic Flow Prediction
    Li, Lijie
    Shao, Hongyang
    Chen, Junhao
    Wang, Ye
    DATA SCIENCE (ICPCSEE 2022), PT I, 2022, 1628 : 3 - 16