Blockwise coordinate descent schemes for efficient and effective dictionary learning

Cited by: 10
Authors
Liu, Bao-Di [1 ]
Wang, Yu-Xiong [2 ]
Shen, Bin [3 ]
Li, Xue [4 ]
Zhang, Yu-Jin [4 ]
Wang, Yan-Jiang [1 ]
Affiliations
[1] China Univ Petr, Coll Informat & Control Engn, Qingdao 266580, Peoples R China
[2] Carnegie Mellon Univ, Sch Comp Sci, Pittsburgh, PA 15213 USA
[3] Purdue Univ, Dept Comp Sci, W Lafayette, IN 47907 USA
[4] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Dictionary learning; Coordinate descent; Sparse representation; Image classification; SPARSE; FACTORIZATION; ALGORITHM;
DOI
10.1016/j.neucom.2015.06.096
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Sparse-representation-based dictionary learning, usually viewed as a method for rearranging the structure of the original data so that its energy becomes compact over a non-orthogonal, overcomplete dictionary, is widely used in signal processing, pattern recognition, machine learning, statistics, and neuroscience. The standard sparse representation framework decouples the optimization problem into two subproblems, alternating between sparse coding and dictionary learning with different optimizers, and thus treats the elements of the dictionary and of the codes separately. In this paper, we treat the elements of both the dictionary and the codes homogeneously. The original optimization is decoupled directly into several blockwise alternating subproblems rather than the two above, so the sparse coding and dictionary learning optimizations are unified. More precisely, the variables of the optimization problem are partitioned into suitable blocks that preserve convexity, which makes an exact blockwise coordinate descent possible. For each separable subproblem, a closed-form solution is obtained from the convexity and monotonicity of the parabolic function. The resulting algorithm is simple, efficient, and effective. Experimental results show that our algorithm significantly accelerates the learning process, and an application to image classification further demonstrates the efficiency of the proposed optimization strategy. (C) 2015 Elsevier B.V. All rights reserved.
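The blockwise update described in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal, generic blockwise coordinate descent for the objective min ||X - DS||_F^2 + 2λ||S||_1 with unit-norm dictionary atoms, where each code row and each atom admits a closed-form update (soft thresholding for codes, a normalized least-squares step for atoms). Function names and the parameter λ are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Closed-form minimizer of a parabola plus an l1 term."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bcd_dictionary_learning(X, n_atoms, lam=0.1, n_iter=30, seed=0):
    """Blockwise coordinate descent sketch for
    min_{D,S} ||X - D S||_F^2 + 2*lam*||S||_1, columns of D unit-norm."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    S = np.zeros((n_atoms, n))
    for _ in range(n_iter):
        R = X - D @ S                      # full residual
        for k in range(n_atoms):
            Rk = R + np.outer(D[:, k], S[k])   # residual with atom k removed
            # closed-form code update: soft-thresholded correlation
            S[k] = soft_threshold(D[:, k] @ Rk, lam)
            # closed-form atom update: least squares, then renormalize
            norm_sq = S[k] @ S[k]
            if norm_sq > 0:
                D[:, k] = Rk @ S[k] / norm_sq
                D[:, k] /= np.linalg.norm(D[:, k])
            R = Rk - np.outer(D[:, k], S[k])
    return D, S
```

Because every block update solves its subproblem exactly, the objective is monotonically non-increasing, which is the property the paper exploits to accelerate learning relative to generic two-phase alternating schemes.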
Pages: 25-35 (11 pages)
Related Papers (50 records)
  • [1] BLOCKWISE COORDINATE DESCENT SCHEMES FOR SPARSE REPRESENTATION
    Liu, Bao-Di
    Wang, Yu-Xiong
    Shen, Bin
    Zhang, Yu-Jin
    Wang, Yan-Jiang
    [J]. 2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014,
  • [2] The blockwise coordinate descent method for integer programs
    Sven Jäger
    Anita Schöbel
    [J]. Mathematical Methods of Operations Research, 2020, 91 : 357 - 381
  • [3] The blockwise coordinate descent method for integer programs
    Jäger, Sven
    Schöbel, Anita
    [J]. MATHEMATICAL METHODS OF OPERATIONS RESEARCH, 2020, 91 (02) : 357 - 381
  • [4] Efficient Dictionary Learning with Gradient Descent
    Gilboa, Dar
    Buchanan, Sam
    Wright, John
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [5] Sparse Representation and Dictionary Learning Based on Alternating Parallel Coordinate Descent
    Tang, Zunyi
    Tamura, Toshiyo
    Ding, Shuxue
    Li, Zhenni
    [J]. 2013 INTERNATIONAL JOINT CONFERENCE ON AWARENESS SCIENCE AND TECHNOLOGY & UBI-MEDIA COMPUTING (ICAST-UMEDIA), 2013, : 491 - +
  • [6] Analysis dictionary learning using block coordinate descent framework with proximal operators
    Li, Zhenni
    Ding, Shuxue
    Hayashi, Takafumi
    Li, Yujie
    [J]. NEUROCOMPUTING, 2017, 239 : 165 - 180
  • [7] Dictionary Learning Based on Nonnegative Matrix Factorization Using Parallel Coordinate Descent
    Tang, Zunyi
    Ding, Shuxue
    Li, Zhenni
    Jiang, Linlin
    [J]. ABSTRACT AND APPLIED ANALYSIS, 2013,
  • [8] Efficient Greedy Coordinate Descent for Composite Problems
    Karimireddy, Sai Praneeth
    Koloskova, Anastasia
    Stich, Sebastian U.
    Jaggi, Martin
    [J]. 22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [9] Robust supervised learning with coordinate gradient descent
    Ibrahim Merad
    Stéphane Gaïffas
    [J]. Statistics and Computing, 2023, 33
  • [10] Robust supervised learning with coordinate gradient descent
    Merad, Ibrahim
    Gaïffas, Stéphane
    [J]. STATISTICS AND COMPUTING, 2023, 33 (05)