Spatial Bias for attention-free non-local neural networks

Cited by: 3
Authors
Go, Junhyung [1 ]
Ryu, Jongbin [1,2]
Institutions
[1] Ajou Univ, Dept Artificial Intelligence, Suwon 16499, South Korea
[2] Ajou Univ, Dept Software & Comp Engn, Suwon 16499, South Korea
Fund
National Research Foundation of Singapore;
Keywords
Non-local operation; Long-range dependency; Spatial Bias; Global context; Image classification; Convolutional neural networks;
DOI
10.1016/j.eswa.2023.122053
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we introduce the Spatial Bias to learn global knowledge without self-attention in convolutional neural networks. Owing to their limited receptive field, conventional convolutional neural networks struggle to learn long-range dependencies. Non-local neural networks learn this global knowledge, but the self-attention operation makes their design unavoidably heavy. We therefore propose a fast and lightweight Spatial Bias that efficiently encodes global knowledge in convolutional neural networks without self-attention. The Spatial Bias is stacked onto the feature map and convolved together with it to adjust the spatial structure of the convolutional features. Because this process uses only convolution operations, our method is lighter and faster than traditional approaches built on the heavy self-attention operation, and the global knowledge is learned directly in the convolution layer with very few additional resources. Compared to non-local neural networks, the Spatial Bias uses about 10x fewer parameters while achieving comparable performance with 1.6~3.3x higher throughput. Furthermore, the Spatial Bias can be combined with conventional non-local neural networks to further improve the performance of the backbone model. We show that the Spatial Bias achieves competitive performance, improving classification accuracy by +0.79% on ImageNet-1K and +1.5% on CIFAR-100. Additionally, we validate our method on the MS-COCO and ADE20K datasets for the downstream tasks of object detection and semantic segmentation.
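The sketch below illustrates the idea described in the abstract as it might be implemented: a small set of globally aggregated context channels is stacked onto the convolutional feature map and the combined tensor is convolved together, so long-range information is injected using only pooling and convolution rather than self-attention. This is a minimal, assumed reading of the mechanism; the module and parameter names (SpatialBias, bias_channels, context_size) and the exact aggregation scheme are illustrative and are not taken from the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialBias(nn.Module):
    """Hypothetical sketch: attention-free global context stacked on the feature map."""

    def __init__(self, in_channels: int, bias_channels: int = 8, context_size: int = 7):
        super().__init__()
        # Compress the feature map into a coarse global-context map using only
        # pooling and 1x1 convolution (no self-attention).
        self.compress = nn.Sequential(
            nn.AdaptiveAvgPool2d(context_size),        # global spatial summary
            nn.Conv2d(in_channels, bias_channels, 1),  # channel reduction
            nn.BatchNorm2d(bias_channels),
            nn.ReLU(inplace=True),
        )
        # Convolve the stacked (feature + bias) tensor back to the original width.
        self.fuse = nn.Conv2d(in_channels + bias_channels, in_channels,
                              kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        context = self.compress(x)                     # (B, bias_channels, s, s)
        context = F.interpolate(context, size=(h, w),
                                mode="bilinear", align_corners=False)
        stacked = torch.cat([x, context], dim=1)       # stack bias channels on the feature map
        return self.fuse(stacked)                      # convolve them together

if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    out = SpatialBias(64)(x)
    print(out.shape)  # torch.Size([2, 64, 32, 32])

Under these assumptions the extra cost is a pooled 1x1 convolution plus a slightly wider fusion convolution, which is consistent with the abstract's claim that the module needs roughly an order of magnitude fewer parameters than a self-attention-based non-local block.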
Pages: 9