Multi-Level Cascade Sparse Representation Learning for Small Data Classification

Citations: 0
Authors
Zhong, Wenyuan [1 ]
Li, Huaxiong [1 ]
Hu, Qinghua [2 ,3 ]
Gao, Yang [4 ]
Chen, Chunlin [1 ]
Affiliations
[1] Nanjing Univ, Dept Control Sci & Intelligence Engn, Nanjing 210093, Peoples R China
[2] Tianjin Univ, Sch Comp Sci & Technol, Tianjin 300072, Peoples R China
[3] Tianjin Univ, Tianjin Key Lab Cognit Comp & Applicat, Tianjin 300072, Peoples R China
[4] Nanjing Univ, Dept Comp Sci & Technol, Natl Key Lab Novel Software Technol, Nanjing 210023, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep cascade; sparse representation; face recognition; small data; FACE RECOGNITION; ILLUMINATION; REGRESSION; MODELS;
DOI
10.1109/TCSVT.2022.3222226
CLC (Chinese Library Classification)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline codes
0808; 0809;
Abstract
Deep learning (DL) methods have recently captured much attention for image classification. However, such methods may yield suboptimal solutions on small-scale data due to the lack of training samples. Sparse representation stands out for its efficiency and interpretability, but its accuracy is less competitive. We develop a Multi-Level Cascade Sparse Representation (ML-CSR) learning method that combines both advantages when processing small-scale data. ML-CSR uses a pyramid structure to expand the effective training data size. It adopts two core modules, the Error-To-Feature (ETF) module and the Generate-Adaptive-Weight (GAW) module, to further improve accuracy. ML-CSR computes inter-layer differences with the ETF module to increase the diversity of samples and obtains adaptive weights based on per-layer accuracy in the GAW module. This helps ML-CSR learn more discriminative features. State-of-the-art results on benchmark face databases validate the effectiveness of the proposed ML-CSR. Ablation experiments demonstrate that the proposed pyramid structure, ETF module, and GAW module each improve the performance of ML-CSR. The code is available at https://github.com/Zhongwenyuan98/ML-CSR.
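The cascade idea the abstract describes (code a sample at each level, feed the reconstruction error forward as a new feature, and combine per-level class scores) can be sketched roughly as follows. This is not the authors' implementation (see the linked repository for that): the sketch substitutes a closed-form ridge-regularized coder for true l1 sparse coding, uses fixed equal level weights in place of the accuracy-based GAW weighting, and all function names are illustrative.

```python
# Illustrative sketch only: ridge-regularized coding stands in for
# l1 sparse coding, and equal level weights stand in for GAW.
import numpy as np

def represent(D, x, lam=0.1):
    """Code x over dictionary D (columns = training atoms) with
    l2-regularized least squares, a closed-form sparse-coding stand-in."""
    A = D.T @ D + lam * np.eye(D.shape[1])
    return np.linalg.solve(A, D.T @ x)

def class_residuals(D, labels, x, alpha):
    """Reconstruction residual of x using only each class's atoms."""
    return {c: np.linalg.norm(x - D[:, labels == c] @ alpha[labels == c])
            for c in np.unique(labels)}

def cascade_predict(D, labels, x, levels=2):
    """Multi-level cascade: each level codes the previous level's
    reconstruction error (an 'error-to-feature' step); per-level
    residual scores are summed with equal weights."""
    classes = np.unique(labels)
    scores = np.zeros(len(classes))
    feat = x
    for _ in range(levels):
        alpha = represent(D, feat)
        res = class_residuals(D, labels, feat, alpha)
        scores += np.array([res[c] for c in classes])
        feat = feat - D @ alpha  # error passed on as next level's feature
    return classes[np.argmin(scores)]  # smallest residual wins
```

A toy usage: with a dictionary whose class-0 atoms point along one direction and class-1 atoms along another, a test sample close to the class-0 subspace is assigned to class 0, since its class-0 reconstruction residual stays smallest across levels.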
Pages: 2451-2464
Page count: 14
Related Papers
50 results
  • [1] A target tracking method combining multi-level sparse representation and metric learning
    Peng, Meng
    Cai, Zi-Xing
    Chen, Bai-Fan
    Kongzhi yu Juece/Control and Decision, 2015, 30 (10): : 1791 - 1796
  • [2] Multi-level Semantic Representation for Flower Classification
    Lin, Chuang
    Yao, Hongxun
    Yu, Wei
    Tang, Wenbo
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2017, PT I, 2018, 10735 : 325 - 335
  • [3] NESTED LEARNING FOR MULTI-LEVEL CLASSIFICATION
    Achddou, Raphael
    Di Martino, J. Matias
    Sapiro, Guillermo
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 2815 - 2819
  • [4] Remote Sensing Image Scene Classification via Multi-Level Representation Learning
    Fu, Wei
    Yang, Lishuang
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2942 - 2948
  • [5] NOTE ON CLASSIFICATION OF MULTI-LEVEL DATA
    LANCE, GN
    WILLIAMS, WT
    COMPUTER JOURNAL, 1967, 9 (04): : 381 - &
  • [6] HIERARCHICAL IMAGE REPRESENTATION VIA MULTI-LEVEL SPARSE CODING
    Lu, Keyu
    Li, Jian
    An, Xiangjing
    He, Hangen
    2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2014, : 4902 - 4906
  • [7] Cascade ensemble learning for multi-level reliability evaluation
    Song, Lu-Kai
    Li, Xue-Qin
    Choy, Yat-Sze
    Zhu, Shun-Peng
    AEROSPACE SCIENCE AND TECHNOLOGY, 2024, 148
  • [8] Employing deep learning and sparse representation for data classification
    Fard, Seyed Mehdi Hazrati
    Hashemi, Sattar
    2017 19TH CSI INTERNATIONAL SYMPOSIUM ON ARTIFICIAL INTELLIGENCE AND SIGNAL PROCESSING (AISP), 2017, : 289 - 293
  • [9] Multi-Level Symbolic Regression: Function Structure Learning for Multi-Level Data
    Sen Fong, Kei
    Motani, Mehul
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [10] Multi-Level Representation Learning for Deep Subspace Clustering
    Kheirandishfard, Mohsen
    Zohrizadeh, Fariba
    Kamangar, Farhad
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 2028 - 2037