Improved channel attention methods via hierarchical pooling and reducing information loss

Cited by: 5
Authors
Zhu, Meng [1 ]
Min, Weidong [1 ,3 ,4 ]
Han, Junwei [2 ,3 ,4 ]
Han, Qing [1 ,3 ,4 ]
Cui, Shimiao [1 ]
Affiliations
[1] Nanchang Univ, Sch Math & Comp Sci, Nanchang 330031, Peoples R China
[2] Nanchang Univ, Sch Software, Nanchang 330047, Peoples R China
[3] Nanchang Univ, Inst Metaverse, Nanchang 330031, Peoples R China
[4] Jiangxi Key Lab Smart City, Nanchang 330031, Peoples R China
Keywords
Convolutional neural networks; Channel attention; Pooling; Reducing information loss; Information encoding;
DOI
10.1016/j.patcog.2023.110148
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Channel attention has been demonstrated to improve the performance of convolutional neural networks. Most existing channel attention methods reduce the channel dimension to lower computational complexity. However, this dimension reduction discards information and therefore degrades performance. To ease this trade-off between complexity and performance, we propose two novel channel attention methods: the Grouping-Shuffle-Aggregation Channel Attention (GSACA) method and the Mixed Encoding Channel Attention (MECA) method. Our GSACA method partitions the channel variables into several groups and applies an independent matrix multiplication, without dimension reduction, to each group. It then enables interaction among all groups through a "channel shuffle" operator, applies a second independent matrix multiplication to each group, and aggregates all channel correlations. Our MECA method encodes channel information through a dual-path architecture so as to benefit from both path topologies: one path uses a multilayer perceptron with dimension reduction to encode channel information, while the other encodes channel information without dimension reduction. Furthermore, a novel pooling operator named hierarchical pooling is presented and applied to both the GSACA and MECA methods. The experimental results showed that our GSACA method almost consistently outperformed most existing channel attention methods and that our MECA method consistently outperformed the existing channel attention methods.
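The abstract only outlines the GSACA pipeline (per-group transforms without dimension reduction, a channel shuffle between them, and a gating aggregation), so the following is a minimal NumPy sketch under stated assumptions: global average pooling as the channel descriptor, per-group square weight matrices `w1`/`w2`, and a sigmoid gate are all choices made here for illustration, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_shuffle(v, groups):
    """ShuffleNet-style shuffle of a (C,) vector: reshape to
    (groups, C//groups), transpose, and flatten back to (C,)."""
    c = v.shape[0]
    return v.reshape(groups, c // groups).T.reshape(c)

def gsaca_attention(x, w1, w2, groups):
    """Hypothetical GSACA-style channel attention.

    x  : feature map of shape (C, H, W)
    w1 : per-group weights, shape (groups, d, d) with d = C // groups
    w2 : per-group weights, shape (groups, d, d)
    """
    c = x.shape[0]
    d = c // groups
    s = x.mean(axis=(1, 2))                    # global average pooling -> (C,)
    s = np.einsum('gij,gj->gi', w1, s.reshape(groups, d))  # per-group transform, no reduction
    s = channel_shuffle(s.reshape(c), groups)  # interaction across all groups
    s = np.einsum('gij,gj->gi', w2, s.reshape(groups, d))  # second per-group transform
    a = sigmoid(s.reshape(c))                  # aggregate into per-channel gates in (0, 1)
    return x * a[:, None, None]                # reweight each channel of the input
```

Because every transform keeps the full `d = C // groups` width, no channel information is compressed away, which is the property the method is designed to preserve.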
Pages: 9