Structured dictionary learning using mixed-norms and group-sparsity constraint

Cited: 1
Authors
Ataee, Zivar [1 ]
Mohseni, Hadis [1 ]
Affiliation
[1] Shahid Bahonar Univ Kerman, Dept Comp Engn, Kerman, Iran
Source
VISUAL COMPUTER | 2020, Vol. 36, Issue 8
Keywords
Supervised dictionary learning; Sparse representation; Structured sparsity; Mixed norms; Classification; DISCRIMINATIVE DICTIONARY; K-SVD; FACE RECOGNITION; REPRESENTATION
DOI
10.1007/s00371-019-01766-8
CLC number
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
Recently, sparse representation and dictionary learning have shown strong performance in machine vision. In particular, several supervised dictionary learning methods have been proposed to improve classification accuracy. Among them, structured dictionary learning is an interesting approach that captures the discriminative properties of each class in class-specific sub-dictionaries and the features common to all classes in a distinct shared sub-dictionary. It extracts the structural information present in the samples of each class to increase classification accuracy. In this paper, a group-based structured dictionary learning method is proposed that captures the structural information in each class and learns class-specific and shared sub-dictionaries using the mixed l(2,1) norm. The mixed l(2,1) norm is also used to obtain the sparse coefficients of data samples over the learned sub-dictionaries. Classification is then performed by finding the class with (1) the minimum reconstruction error or (2) the maximum number of nonzero groups under the l(1,0) norm. The proposed method is evaluated through experiments on the Extended YaleB, AR, and CMU-PIE face databases and the USPS handwritten digits database. The experimental results demonstrate the effectiveness of the proposed method in data representation and classification.
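The two-stage pipeline the abstract describes (group-sparse coding over class sub-dictionaries, then classification by per-class reconstruction error) can be illustrated with a minimal sketch. This is not the authors' actual algorithm: it assumes a fixed, pre-learned dictionary and solves the l(2,1)-regularized coding problem with plain ISTA (proximal gradient with group soft-thresholding); all function names and parameters are illustrative.

```python
import numpy as np

def group_soft_threshold(x, groups, t):
    # Proximal operator of the l(2,1) norm: shrink each group's l2 norm by t,
    # zeroing groups whose norm falls below the threshold.
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * x[g]
    return out

def group_sparse_code(y, D, groups, lam=0.1, n_iter=200):
    # ISTA for: min_x 0.5 * ||y - D x||_2^2 + lam * sum_g ||x_g||_2
    # (a standard solver choice, assumed here; the paper may use another).
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the data-fit term
        x = group_soft_threshold(x - grad / L, groups, lam / L)
    return x

def classify(y, D, groups, lam=0.1):
    # Rule (1) from the abstract: assign y to the class whose sub-dictionary
    # (one coefficient group per class) reconstructs it with minimum error.
    x = group_sparse_code(y, D, groups, lam)
    errors = [np.linalg.norm(y - D[:, g] @ x[g]) for g in groups]
    return int(np.argmin(errors))
```

For example, with a toy dictionary of two three-atom class blocks and a test signal lying in the span of the first block, `classify` returns class 0, since the l(2,1) penalty concentrates the nonzero coefficients in that block.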
Pages: 1679-1692
Page count: 14
Related papers
(50 total)
  • [1] Structured dictionary learning using mixed-norms and group-sparsity constraint
    Zivar Ataee
    Hadis Mohseni
    [J]. The Visual Computer, 2020, 36 : 1679 - 1692
  • [2] Discriminative structured dictionary learning with hierarchical group sparsity
    Xu, Yong
    Sun, Yuping
    Quan, Yuhui
    Zheng, Bo
    [J]. COMPUTER VISION AND IMAGE UNDERSTANDING, 2015, 136 : 59 - 68
  • [3] Sparse angle CT reconstruction with weighted dictionary learning algorithm based on adaptive group-sparsity regularization
    Yang, Tiejun
    Tang, Lu
    Tang, Qi
    Li, Lei
    [J]. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY, 2021, 29 (03) : 435 - 452
  • [4] Group-Sparsity Learning Approach for Bearing Fault Diagnosis
    Dai, Jisheng
    So, Hing Cheung
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (07) : 4566 - 4576
  • [5] Unbiased Group-Sparsity Sensing Using Quadratic Envelopes
    Carlsson, Marcus
    Tourneret, Jean-Yves
    Wendt, Herwig
    [J]. 2019 IEEE 8TH INTERNATIONAL WORKSHOP ON COMPUTATIONAL ADVANCES IN MULTI-SENSOR ADAPTIVE PROCESSING (CAMSAP 2019), 2019, : 425 - 429
  • [6] Denoising with weak signal preservation by group-sparsity transform learning
    Wang, Xiaojing
    Wen, Bihan
    Ma, Jianwei
    [J]. GEOPHYSICS, 2019, 84 (06) : V351 - V368
  • [7] Identification of dynamic forces using group-sparsity in frequency domain
    Rezayat, A.
    Nassiri, V.
    De Pauw, B.
    Ertveldt, J.
    Vanlanduit, S.
    Guillaume, P.
    [J]. MECHANICAL SYSTEMS AND SIGNAL PROCESSING, 2016, 70-71 : 756 - 768
  • [8] Discriminative group-sparsity constrained broad learning system for visual recognition
    Jin, Junwei
    Li, Yanting
    Yang, Tiejun
    Zhao, Liang
    Duan, Junwei
    Chen, C. L. Philip
    [J]. INFORMATION SCIENCES, 2021, 576 : 800 - 818
  • [9] Learning dictionary from signals under global sparsity constraint
    Meng, Deyu
    Zhao, Qian
    Leung, Yee
    Xu, Zongben
    [J]. NEUROCOMPUTING, 2013, 119 : 308 - 318
  • [10] Multi-view discriminative and structured dictionary learning with group sparsity for human action recognition
    Gao, Z.
    Zhang, H.
    Xu, G. P.
    Xue, Y. B.
    Hauptmann, A. G.
    [J]. SIGNAL PROCESSING, 2015, 112 : 83 - 97