A Progressive Gated Attention Model for Fine-Grained Visual Classification

Cited by: 1
Authors
Zhu, Qiangxi [1 ,2 ]
Li, Zhixin [1 ,2 ]
Affiliations
[1] Guangxi Normal Univ, Key Lab Educ Blockchain & Intelligent Technol, Minist Educ, Guilin 541004, Peoples R China
[2] Guangxi Normal Univ, Guangxi Key Lab Multi Source Informat Min & Secur, Guilin 541004, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Feature fusion; Channel attention; Gating mechanism; Spatial attention; Cross-layer features;
DOI
10.1109/ICME55011.2023.00353
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Fine-grained image classification has been a hot research topic in computer vision, pattern recognition, and related fields in recent years. Most existing fine-grained methods use a single attention mechanism or multiple sub-networks to zoom in on and locate discriminative local feature regions. These models seldom explore the intrinsic connections between cross-layer features that share similar semantics, and as a result they tend to perform erratically on images with complex backgrounds. To this end, we propose a method that focuses on cross-layer feature-context relations. First, we propose a learning scheme based on feature channel weights and pyramid patterns, which increases the diversity of global feature information. Second, we employ cross-layer attention to locate key target regions. Finally, we propose a cross-gated attention mechanism that extracts rich discriminative features from the key regions of an image. Experiments show that the proposed model performs well on three datasets: CUB-200-2011, Stanford Cars, and FGVC Aircraft.
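The abstract describes three components (channel-weight/pyramid learning, cross-layer attention, and a cross-gated attention mechanism) but gives no implementation details. As a rough illustration only, the sketch below shows one plausible form of a gated cross-layer fusion block in PyTorch; the class name GatedCrossLayerAttention, the squeeze-and-excitation style channel weighting, and the spatial gate are assumptions for exposition, not the authors' actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedCrossLayerAttention(nn.Module):
    """Illustrative (assumed) gated fusion of a shallow and a deep feature map."""

    def __init__(self, shallow_channels: int, deep_channels: int, reduction: int = 16):
        super().__init__()
        # 1x1 projection so the shallow map matches the deep channel dimension.
        self.proj = nn.Conv2d(shallow_channels, deep_channels, kernel_size=1)
        # Squeeze-and-excitation style channel attention on the deep features.
        self.channel_fc = nn.Sequential(
            nn.Linear(deep_channels, deep_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(deep_channels // reduction, deep_channels),
            nn.Sigmoid(),
        )
        # Per-location gate deciding how much shallow detail to inject.
        self.gate = nn.Conv2d(2 * deep_channels, 1, kernel_size=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Upsample and project the shallow map to the deep map's resolution and width.
        shallow = self.proj(F.interpolate(shallow, size=deep.shape[-2:],
                                          mode="bilinear", align_corners=False))
        # Channel attention weights from globally pooled deep features.
        w = self.channel_fc(deep.mean(dim=(2, 3)))            # (B, C)
        deep = deep * w.unsqueeze(-1).unsqueeze(-1)           # reweight channels
        # Spatial gate computed from the concatenated maps.
        g = torch.sigmoid(self.gate(torch.cat([shallow, deep], dim=1)))  # (B, 1, H, W)
        return g * shallow + (1.0 - g) * deep                 # gated fusion

if __name__ == "__main__":
    block = GatedCrossLayerAttention(shallow_channels=512, deep_channels=2048)
    x_shallow = torch.randn(2, 512, 28, 28)   # e.g. an intermediate backbone stage
    x_deep = torch.randn(2, 2048, 7, 7)       # e.g. the final backbone stage
    print(block(x_shallow, x_deep).shape)     # torch.Size([2, 2048, 7, 7])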
Pages: 2063-2068
Number of pages: 6