FDAM: full-dimension attention module for deep convolutional neural networks

Cited by: 3
Authors
Cai, Silin [1 ]
Wang, Changping [1 ]
Ding, Jiajun [2 ,3 ]
Yu, Jun [2 ]
Fan, Jianping [2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Zhuoyue Honor Coll, 2 Main St, Hangzhou 310018, Zhejiang, Peoples R China
[2] Hangzhou Dianzi Univ, Sch Comp Sci & Technol, 2 Main St, Hangzhou 310018, Zhejiang, Peoples R China
[3] Hangzhou Dianzi Univ, Shangyu Inst Sci & Engn, Shangyu 312300, Zhejiang, Peoples R China
Keywords
Attention mechanism; Convolutional neural network; Image classification; Object recognition; Elo rating mechanism
DOI
10.1007/s13735-022-00248-3
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The attention mechanism is an important component of cross-modal research. It can improve the performance of convolutional neural networks by distinguishing the informative parts of the feature map from the useless ones. Recent studies have proposed various kinds of attention, each weighting the parts of the feature map according to a different division of its dimensions. In this paper, we propose the full-dimension attention module (FDAM), a lightweight, fully interactive 3-D attention mechanism. FDAM generates 3-D attention maps over the spatial and channel dimensions in parallel and multiplies them with the feature map. Obtaining a discriminative cell-level attention map with channel interaction at a low computational cost is difficult; we therefore adapt a generalized Elo rating mechanism to generate cell-level attention maps, storing historical information in a small number of non-trainable parameters so that the computation is spread over the training iterations. The proposed module can be seamlessly integrated into end-to-end CNN training. Experiments demonstrate that it outperforms many existing attention mechanisms across different network structures and datasets on computer vision tasks such as image classification and object detection.
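The abstract's pipeline (a persistent, non-trainable rating buffer updated Elo-style at each iteration, then squashed into a cell-wise 3-D attention map that reweights the feature map) can be sketched roughly as follows. This is an illustrative NumPy toy, not the authors' FDAM: the function name, the choice of the mean activation as each cell's "opponent", and the update rate k are all assumptions made for the example.

```python
import numpy as np

def full_dim_attention(x, rating, k=0.1):
    """Toy cell-level 3-D attention with an Elo-style running buffer.

    x      : feature map of shape (C, H, W)
    rating : persistent per-cell rating buffer, same shape as x
             (non-trainable state carried across iterations)
    """
    # Elo expected score: each cell "plays" against the mean activation.
    expected = 1.0 / (1.0 + np.exp(-(x - x.mean())))
    # Actual outcome: a cell "wins" if it is above the mean.
    outcome = (x > x.mean()).astype(float)
    # Elo-style running update of the non-trainable rating buffer.
    rating = rating + k * (outcome - expected)
    # Squash ratings into (0, 1) to form the 3-D attention map.
    attn = 1.0 / (1.0 + np.exp(-rating))
    # Element-wise reweighting of the feature map.
    return x * attn, rating
```

Because the rating buffer persists across calls, each training iteration only pays for one cheap update while the attention map still reflects accumulated history, which matches the abstract's point about spreading the computation over iterations.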
Pages: 599-610 (12 pages)
Related papers (showing 10 of 50)
  • [1] Cai, Silin; Wang, Changping; Ding, Jiajun; Yu, Jun; Fan, Jianping. FDAM: full-dimension attention module for deep convolutional neural networks. International Journal of Multimedia Information Retrieval, 2022, 11: 599-610.
  • [2] Zhu, Baozhou; Hofstee, Peter; Lee, Jinho; Al-Ars, Zaid. An Attention Module for Convolutional Neural Networks. Artificial Neural Networks and Machine Learning - ICANN 2021, Pt I, 2021, 12891: 167-178.
  • [3] Li, Guoqiang; Fang, Qi; Zha, Linlin; Gao, Xin; Zheng, Nenggan. HAM: Hybrid attention module in deep convolutional neural networks for image classification. Pattern Recognition, 2022, 129.
  • [4] Wang, F.; Qiao, R. SPAM: Spatially Partitioned Attention Module in Deep Convolutional Neural Networks for Image Classification. Hsi-An Chiao Tung Ta Hsueh/Journal of Xi'an Jiaotong University, 2023, 57(09): 185-192.
  • [5] Singh, Pravendra; Mazumder, Pratik; Namboodiri, Vinay P. Context extraction module for deep convolutional neural networks. Pattern Recognition, 2022, 122.
  • [6] Li, Daihui; Zeng, Shangyou; Li, Wenhui; Yang, Lei. A New Cyclic Spatial Attention Module for Convolutional Neural Networks. 2019 IEEE 11th International Conference on Communication Software and Networks (ICCSN 2019), 2019: 607-611.
  • [7] Liu, Tonglai; Luo, Ronghai; Xu, Longqin; Feng, Dachun; Cao, Liang; Liu, Shuangyin; Guo, Jianjun. Spatial Channel Attention for Deep Convolutional Neural Networks. Mathematics, 2022, 10(10).
  • [8] Ma, Xu; Guo, Jingda; Sansom, Andrew; McGuire, Mara; Kalaani, Andrew; Chen, Qi; Tang, Sihai; Yang, Qing; Fu, Song. Spatial Pyramid Attention for Deep Convolutional Neural Networks. IEEE Transactions on Multimedia, 2021, 23: 3048-3058.
  • [9] Dodda, Vineela Chandra; Kuruguntla, Lakshmi; Mandpura, Anup Kumar; Elumalai, Karthikeyan; Sen, Mrinal K. Deep Convolutional Neural Network With Attention Module for Seismic Impedance Inversion. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2023, 16: 8076-8086.
  • [10] Park, Jongchan; Woo, Sanghyun; Lee, Joon-Young; Kweon, In So. A Simple and Light-Weight Attention Module for Convolutional Neural Networks. International Journal of Computer Vision, 2020, 128(04): 783-798.