A weighted block cooperative sparse representation algorithm based on visual saliency dictionary

Cited by: 4
Authors
Chen, Rui [1 ]
Li, Fei [2 ]
Tong, Ying [1 ]
Wu, Minghu [3 ]
Jiao, Yang [4 ]
Affiliations
[1] Nanjing Inst Technol, Coll Informat & Commun Engn, Nanjing 211167, Peoples R China
[2] Nanjing Inst Technol, Coll Elect Power Engn, Nanjing, Peoples R China
[3] Hubei Univ Technol, Coll Elect & Elect Engn, Wuhan, Peoples R China
[4] Univ Toronto, Dept Stat, Toronto, ON, Canada
Funding
National Natural Science Foundation of China;
Keywords
cooperative sparse representation; dictionary learning; face recognition; feature extraction; noise dictionary; visual saliency; FACE RECOGNITION; DENSITY;
DOI
10.1049/cit2.12090
CLC classification number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unconstrained face images are affected by many factors, such as illumination, pose, expression, occlusion, age and accessories, so the noise contaminating the original samples is random. To improve sample quality, a weighted block cooperative sparse representation algorithm based on a visual saliency dictionary is proposed. First, the algorithm uses the biological visual attention mechanism to locate the salient facial target quickly and accurately and constructs the visual saliency dictionary. Then, a block cooperation framework is presented to perform sparse coding on different local structures of the face, and a weighted regularisation term is introduced into the sparse representation process to strengthen the discriminative information hidden in the coding coefficients. Finally, by synthesising the sparse representation results of all visually salient block dictionaries, the global coding residual is obtained and the class label is assigned. Experimental results on four databases, namely AR, Extended Yale B, LFW and PubFig, show that the combination of the visual saliency dictionary, block cooperative sparse representation and weighted constraint coding effectively enhances the accuracy of the sparse representation of the test samples and improves the performance of unconstrained face recognition.
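The abstract outlines a three-stage pipeline: build saliency-driven block dictionaries, code each probe block cooperatively under a weighted regulariser, and fuse the per-block class residuals to assign the label. As an illustration only, the minimal NumPy sketch below mimics the weighted cooperative coding and residual-fusion stage with a closed-form ridge-style solver; the function name weighted_block_crc, the distance-based atom weighting and the value of lam are assumptions made for demonstration rather than the paper's exact formulation, and the saliency-based block extraction itself is omitted.

import numpy as np

def weighted_block_crc(blocks_test, blocks_dict, labels, lam=1e-2):
    """Hedged sketch of weighted block cooperative representation classification.

    blocks_test : list of 1-D arrays, one feature vector per salient block of the probe face.
    blocks_dict : list of 2-D arrays (features x samples), the corresponding
                  visual-saliency block dictionaries built from training samples.
    labels      : 1-D array of class labels for the dictionary columns.
    lam         : regularisation strength of the weighted penalty (assumed value).
    """
    classes = np.unique(labels)
    residuals = np.zeros(len(classes))

    for y, D in zip(blocks_test, blocks_dict):
        # Weight each atom by its distance to the probe block, so closer atoms
        # are penalised less (one common choice of weighting; the paper's exact
        # weight definition may differ).
        w = np.linalg.norm(D - y[:, None], axis=0)
        W = np.diag(w / (w.max() + 1e-12))

        # Closed-form solution of  min_x ||y - D x||_2^2 + lam * ||W x||_2^2
        x = np.linalg.solve(D.T @ D + lam * (W.T @ W), D.T @ y)

        # Accumulate per-class reconstruction residuals over all salient blocks.
        for i, c in enumerate(classes):
            idx = labels == c
            residuals[i] += np.linalg.norm(y - D[:, idx] @ x[idx])

    # Global decision: the class with the smallest fused residual.
    return classes[np.argmin(residuals)]

# Toy usage with random data standing in for salient-block features.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(3), 5)                    # 3 classes, 5 samples each
blocks_dict = [rng.standard_normal((20, 15)) for _ in range(4)]
probe = [d[:, 7] + 0.05 * rng.standard_normal(20) for d in blocks_dict]
print(weighted_block_crc(probe, blocks_dict, labels))  # expected: class 1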
Pages: 235-246
Page count: 12
Related papers
50 items in total
  • [42] Attributed Scattering Center Extraction Algorithm Based on Sparse Representation With Dictionary Refinement
    Liu, Hongwei
    Jiu, Bo
    Li, Fei
    Wang, Yinghua
    IEEE TRANSACTIONS ON ANTENNAS AND PROPAGATION, 2017, 65 (05) : 2604 - 2614
  • [43] RBDL: Robust block-Structured dictionary learning for block sparse representation
    Seghouane, Abd-Krim
    Iqbal, Asif
    Rekavandi, Aref Miri
    PATTERN RECOGNITION LETTERS, 2023, 172 : 89 - 96
  • [44] A novel traffic sign recognition algorithm based on sparse representation and dictionary learning
    Wang, Bin
    Kong, Bin
    Ding, Dawen
    Wang, Can
    Yang, Jing
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2017, 32 (05) : 3775 - 3784
  • [45] A Fast Algorithm for Learning Overcomplete Dictionary for Sparse Representation Based on Proximal Operators
    Li, Zhenni
    Ding, Shuxue
    Li, Yujie
    NEURAL COMPUTATION, 2015, 27 (09) : 1951 - 1982
  • [46] Visual target tracking algorithm via multi-scale block and sparse representation
    Li, Ming
    Kong, Cui-Cui
    Nian, Fu-Zhong
    Wang, Lei
    International Journal of Signal Processing, Image Processing and Pattern Recognition, 2015, 8 (08) : 333 - 344
  • [47] Visual Tracking via Sparse Representation and Online Dictionary Learning
    Cheng, Xu
    Li, Nijun
    Zhou, Tongchi
    Zhou, Lin
    Wu, Zhenyang
    ACTIVITY MONITORING BY MULTIPLE DISTRIBUTED SENSING, 2014, 8703 : 87 - 103
  • [49] Saliency Detection by Superpixel-Based Sparse Representation
    Chen, Guangyao
    Chen, Zhenzhong
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2017, PT II, 2018, 10736 : 447 - 456
  • [50] Visual tracking based on sparse dense structure representation and online robust dictionary learning
    Yuan, Guang-Lin
    Xue, Mo-Gen
    Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2015, 37 (03): 536 - 542