FDAM: full-dimension attention module for deep convolutional neural networks

Cited by: 3
Authors
Cai, Silin [1 ]
Wang, Changping [1 ]
Ding, Jiajun [2 ,3 ]
Yu, Jun [2 ]
Fan, Jianping [2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Zhuoyue Honor Coll, 2 Main St, Hangzhou 310018, Zhejiang, Peoples R China
[2] Hangzhou Dianzi Univ, Sch Comp Sci & Technol, 2 Main St, Hangzhou 310018, Zhejiang, Peoples R China
[3] Hangzhou Dianzi Univ, Shangyu Inst Sci & Engn, Shangyu 312300, Zhejiang, Peoples R China
Keywords
Attention mechanism; Convolutional neural network; Image classification; Object recognition; Elo rating mechanism;
DOI
10.1007/s13735-022-00248-3
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The attention mechanism is an important component of cross-modal research. It can improve the performance of convolutional neural networks by distinguishing the informative parts of a feature map from the useless ones. Recent studies have proposed various kinds of attention, each using a distinct division method to weight the parts of the feature map. In this paper, we propose the full-dimension attention module (FDAM), a lightweight, fully interactive 3-D attention mechanism. FDAM generates 3-D attention maps for the spatial and channel dimensions in parallel and then multiplies them with the feature map. Obtaining discriminative attention-map cells under channel interaction at a low computational cost is difficult, so we adapt a generalized Elo rating mechanism to generate cell-level attention maps. We store historical information in a small number of non-trainable parameters to spread the computation over the training iterations. The proposed module can be seamlessly integrated into end-to-end training of a CNN framework. Experiments demonstrate that it outperforms many existing attention mechanisms across network structures and datasets on computer vision tasks such as image classification and object detection.
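The abstract does not give the exact update rule, so the following is only a loose illustration of the idea of Elo-rated attention cells: each cell keeps a persistent rating in a non-trainable buffer, the rating is updated with the classic Elo formula based on a simple per-iteration "match" outcome (here, a hypothetical pairing of each cell against the map mean), and the attention weight is a squashed function of the rating. The pairing scheme, K-factor, and scaling are assumptions for the sketch, not the paper's actual generalization.

```python
import numpy as np

def elo_expected(r_a, r_b, scale=400.0):
    """Classic Elo expected score of a player rated r_a against one rated r_b."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / scale))

def fdam_step(feature, ratings, k=32.0, base_rating=1500.0):
    """One illustrative attention step on a (C, H, W) feature map.

    `ratings` plays the role of the persistent, non-trainable state that
    accumulates historical information across training iterations.
    """
    # Hypothetical "match" outcome: a cell wins (1) if its activation
    # exceeds the global mean of the feature map, otherwise it loses (0).
    outcome = (feature > feature.mean()).astype(np.float64)
    expected = elo_expected(ratings, base_rating)
    ratings = ratings + k * (outcome - expected)  # standard Elo update
    # Squash the rating into a (0, 1) cell-level attention weight.
    attention = 1.0 / (1.0 + np.exp(-(ratings - base_rating) / 100.0))
    return feature * attention, ratings

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
ratings = np.full(feat.shape, 1500.0)  # persistent non-trainable buffer
out, ratings = fdam_step(feat, ratings)
```

Because the ratings persist between calls, repeated `fdam_step` calls spread the cost of building the attention map over iterations, matching the abstract's description of storing history in non-trainable parameters.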
Pages: 599-610 (12 pages)
Related papers
50 in total
  • [11] Park, Jongchan; Woo, Sanghyun; Lee, Joon-Young; Kweon, In So. A Simple and Light-Weight Attention Module for Convolutional Neural Networks. International Journal of Computer Vision, 2020, 128: 783-798.
  • [12] Yang, Hua; Yang, Ming; He, Bitao; Qin, Tao; Yang, Jing. Multiscale Hybrid Convolutional Deep Neural Networks with Channel Attention. Entropy, 2022, 24(09).
  • [13] Hizukuri, Akiyoshi; Hirata, Yuto; Nakayama, Ryohei. Semantic Face Segmentation Using Convolutional Neural Networks With a Supervised Attention Module. IEEE Access, 2023, 11: 116892-116902.
  • [14] Yang, Lingxiao; Zhang, Ru-Yuan; Li, Lida; Xie, Xiaohua. SimAM: A Simple, Parameter-Free Attention Module for Convolutional Neural Networks. International Conference on Machine Learning, Vol. 139, 2021.
  • [15] Dong, Rui; Li, Ang; Hardjawana, Wibowo; Li, Yonghui; Ge, Xiaohu; Vucetic, Branka. Joint Beamforming and User Association Scheme for Full-Dimension Massive MIMO Networks. IEEE Transactions on Vehicular Technology, 2019, 68(08): 7733-7746.
  • [16] Li, Jiafeng; Li, Zelin; Wen, Ying. EAN: An Efficient Attention Module Guided by Normalization for Deep Neural Networks. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 4, 2024: 3100-3108.
  • [17] Zhang, Qing-Long; Yang, Yu-Bin. SA-Net: Shuffle Attention for Deep Convolutional Neural Networks. 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), 2021: 2235-2239.
  • [18] Zhao, Yue; Chen, Junzhou; Zhang, Zirui; Zhang, Ronghui. BA-Net: Bridge Attention for Deep Convolutional Neural Networks. Computer Vision, ECCV 2022, Pt. XXI, 2022, 13681: 297-312.
  • [19] Jiang, Guanghao; Jiang, Xiaoyan; Fang, Zhijun; Chen, Shanshan. An Efficient Attention Module for 3D Convolutional Neural Networks in Action Recognition. Applied Intelligence, 2021, 51(10): 7043-7057.
  • [20] Ma, Xu; Qiao, Zhinan; Guo, Jingda; Tang, Sihai; Chen, Qi; Yang, Qing; Fu, Song. Cascaded Context Dependency: An Extremely Lightweight Module for Deep Convolutional Neural Networks. 2020 IEEE International Conference on Image Processing (ICIP), 2020: 1741-1745.