Enhancing Classification with Hierarchical Scalable Query on Fusion Transformer

Cited by: 2
Authors
Sahoo, Sudeep Kumar [1 ]
Chalasani, Sathish [1 ]
Joshi, Abhishek [1 ]
Iyer, Kiran Nanjunda [1 ]
Affiliations
[1] Samsung R&D Inst, Bangalore, India
Keywords
DOI
10.1109/ICCE56470.2023.10043496
Chinese Library Classification
TP39 [Computer Applications]
Discipline Classification Codes
081203; 0835
Abstract
Real-world vision-based applications such as e-commerce, mobile apps, and warehouse management require fine-grained classification, where reducing the severity of mistakes and improving classification accuracy are of utmost importance. This paper proposes a method to boost fine-grained classification through a hierarchical approach built on learnable, independent query embeddings. This is achieved with a classification network that uses coarse class predictions to improve fine class accuracy in a stage-wise, sequential manner. We exploit the label hierarchy to learn query embeddings that are scalable across all levels, making the approach relevant even for extreme classification with a very large number of classes. Each query is initialized with a weighted Eigen image computed from the training samples to best represent and capture the variance of the object. We introduce transformer blocks that fuse the intermediate layers at which query attention occurs, enhancing the spatial representation of feature maps at different scales; this multi-scale fusion improves accuracy on small objects. We propose a twofold approach for the unique representation of learnable queries. First, at each hierarchical level, we apply a cluster-based loss that enforces maximum separation between inter-class query embeddings and helps learn a better query representation in higher-dimensional spaces. Second, we fuse coarse-level queries with finer-level queries, weighted by a learned scale factor. We additionally introduce a novel Cross Attention on Multi-level queries with Prior (CAMP) block that reduces error propagation from the coarse level to finer levels, a common problem in hierarchical classifiers. Our method outperforms existing methods with an improvement of about 11% on fine-grained classification.
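To illustrate the coarse-to-fine query mechanism described in the abstract, the following is a minimal PyTorch sketch: learnable per-class query embeddings at each hierarchy level, cross-attention of those queries over fused feature tokens, and fine-level queries augmented by their parent coarse-level queries through a learned scale factor. All module names, dimensions, the parent mapping, and the attention layout are illustrative assumptions, not the authors' implementation; the Eigen-image initialization, cluster-based loss, and CAMP block are omitted.

```python
# Hedged sketch of hierarchical query attention with coarse-to-fine fusion.
# Feature tokens and the class hierarchy are stubbed so the example runs
# stand-alone; in the paper they would come from the multi-scale fusion
# transformer and the dataset's label tree, respectively.
import torch
import torch.nn as nn


class HierarchicalQueryHead(nn.Module):
    def __init__(self, feat_dim=256, num_coarse=10, num_fine=100, num_heads=8):
        super().__init__()
        # One learnable query embedding per class at each hierarchy level.
        self.coarse_queries = nn.Parameter(torch.randn(num_coarse, feat_dim))
        self.fine_queries = nn.Parameter(torch.randn(num_fine, feat_dim))
        # Learned scale factor weighting how much coarse information is
        # fused into the fine-level queries (assumed scalar form).
        self.fuse_scale = nn.Parameter(torch.tensor(0.5))
        # Maps each fine class to its parent coarse class; random here,
        # taken from the label hierarchy in practice.
        self.register_buffer(
            "parent", torch.randint(0, num_coarse, (num_fine,)))
        self.coarse_attn = nn.MultiheadAttention(feat_dim, num_heads,
                                                 batch_first=True)
        self.fine_attn = nn.MultiheadAttention(feat_dim, num_heads,
                                               batch_first=True)
        self.coarse_cls = nn.Linear(feat_dim, 1)
        self.fine_cls = nn.Linear(feat_dim, 1)

    def forward(self, feats):
        # feats: (B, N, C) tokens from a multi-scale fused feature map.
        B = feats.size(0)
        # Stage 1: coarse queries attend to the fused feature tokens.
        cq = self.coarse_queries.unsqueeze(0).expand(B, -1, -1)
        cq_out, _ = self.coarse_attn(cq, feats, feats)
        coarse_logits = self.coarse_cls(cq_out).squeeze(-1)   # (B, num_coarse)
        # Stage 2: each fine query is fused with its parent coarse query,
        # weighted by the learned scale factor, before attending to features.
        fq = self.fine_queries.unsqueeze(0).expand(B, -1, -1)
        fq = fq + self.fuse_scale * cq_out[:, self.parent, :]
        fq_out, _ = self.fine_attn(fq, feats, feats)
        fine_logits = self.fine_cls(fq_out).squeeze(-1)       # (B, num_fine)
        return coarse_logits, fine_logits


if __name__ == "__main__":
    head = HierarchicalQueryHead()
    tokens = torch.randn(2, 196, 256)   # e.g. a 14x14 fused feature map
    coarse, fine = head(tokens)
    print(coarse.shape, fine.shape)     # (2, 10) and (2, 100)
```

The stage-wise structure mirrors the abstract's description of coarse predictions conditioning the fine level: the fine-level queries only see coarse information through the learned scale factor, which is the knob the paper's CAMP block refines further to limit error propagation.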
Pages: 6