Sparse nonnegative tensor decomposition using proximal algorithm and inexact block coordinate descent scheme

Cited by: 0
Authors
Deqing Wang
Zheng Chang
Fengyu Cong
Affiliations
[1] Dalian University of Technology,School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering
[2] University of Jyväskylä,Faculty of Information Technology
[3] University of Electronic Science and Technology of China,School of Computer Science and Engineering
[4] Dalian University of Technology,School of Artificial Intelligence, Faculty of Electronic Information and Electrical Engineering
[5] Dalian University of Technology,Key Laboratory of Integrated Circuit and Biomedical Electronic System, Liaoning Province
Source
Keywords
Tensor decomposition; Nonnegative CANDECOMP/PARAFAC decomposition; Sparse regularization; Proximal algorithm; Inexact block coordinate descent;
DOI: not available
Abstract
Nonnegative tensor decomposition is a versatile tool for multiway data analysis, by which the extracted components are nonnegative and usually sparse. Nevertheless, the sparsity is only a side effect and cannot be explicitly controlled without additional regularization. In this paper, we investigate nonnegative CANDECOMP/PARAFAC (NCP) decomposition with an $l_1$-norm sparse regularization term (sparse NCP). When high sparsity is imposed, the factor matrices contain more zero components and are no longer of full column rank. Thus, sparse NCP is prone to rank deficiency, and its algorithms may fail to converge. We propose a novel sparse NCP model based on the proximal algorithm, whose subproblems are strongly convex in the block coordinate descent (BCD) framework. Therefore, the new sparse NCP preserves the full column rank condition and is guaranteed to converge to a stationary point. In addition, we propose an inexact BCD scheme for sparse NCP, in which each subproblem is updated multiple times to speed up the computation. To demonstrate the effectiveness and efficiency of sparse NCP with the proximal algorithm, we employ two optimization algorithms to solve the model: inexact alternating nonnegative quadratic programming and inexact hierarchical alternating least squares. We evaluated the proposed sparse NCP methods on synthetic, real-world, small-scale, and large-scale tensor data. The experimental results demonstrate that the proposed algorithms efficiently impose sparsity on the factor matrices, extract meaningful sparse components, and outperform state-of-the-art methods.
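The proximal, inexact-BCD idea described in the abstract can be sketched on the two-block matrix analogue of sparse NCP. The function below is an illustrative sketch, not the authors' implementation: the names `lam` (the $l_1$ weight), `mu` (the proximal weight), and the projected-gradient inner solver are assumptions chosen for the example. Each block subproblem gains a proximal term $\frac{\mu}{2}\|\cdot - \text{previous}\|_F^2$, which keeps it strongly convex even when a factor loses full column rank, and is solved only inexactly with a few inner gradient steps.

```python
import numpy as np

def soft_nonneg(Z, tau):
    """Nonnegative soft-threshold: prox of tau*||.||_1 restricted to the nonnegative orthant."""
    return np.maximum(Z - tau, 0.0)

def sparse_nmf_inexact_bcd(X, rank, lam=0.1, mu=1e-3, inner=5, outer=200, seed=0):
    """Inexact BCD with a proximal term (matrix analogue of sparse NCP).

    Minimizes 0.5*||X - A @ B.T||_F^2 + lam*||B||_1 over A, B >= 0.
    Each block update adds mu/2*||. - previous iterate||_F^2, so every
    subproblem is strongly convex, and runs only a few inner steps.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((X.shape[0], rank))
    B = rng.random((X.shape[1], rank))
    for _ in range(outer):
        # --- A-block: nonnegative least squares with proximal term ---
        Ak = A.copy()
        G = B.T @ B
        L = np.linalg.norm(G, 2) + mu            # gradient Lipschitz constant
        for _ in range(inner):                   # inexact: few inner steps
            grad = A @ G - X @ B + mu * (A - Ak)
            A = np.maximum(A - grad / L, 0.0)
        # --- B-block: adds the l1 sparsity penalty via soft-thresholding ---
        Bk = B.copy()
        G = A.T @ A
        L = np.linalg.norm(G, 2) + mu
        for _ in range(inner):
            grad = B @ G - X.T @ A + mu * (B - Bk)
            B = soft_nonneg(B - grad / L, lam / L)
    return A, B
```

Extending this to an order-$N$ tensor means cycling over the $N$ factor matrices, with the Gram matrix `G` built from the Khatri-Rao product of the other factors; the proximal term and the inner-loop structure are unchanged.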
Pages: 17369-17387 (18 pages)
Related papers
50 items total
  • [31] Efficient Nonnegative Tensor Decomposition Using Alternating Direction Proximal Method of Multipliers
    Wang, Deqing
    Hu, Guoqiang
    [J]. CHINESE JOURNAL OF ELECTRONICS, 2024, 33 (05) : 1308 - 1316
  • [33] Analysis dictionary learning using block coordinate descent framework with proximal operators
    Li, Zhenni
    Ding, Shuxue
    Hayashi, Takafumi
    Li, Yujie
    [J]. NEUROCOMPUTING, 2017, 239 : 165 - 180
  • [34] DID: Distributed Incremental Block Coordinate Descent for Nonnegative Matrix Factorization
    Gao, Tianxiang
    Chu, Chris
    [J]. THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 2991 - 2998
  • [35] Local linear convergence of proximal coordinate descent algorithm
    Klopfenstein, Quentin
    Bertrand, Quentin
    Gramfort, Alexandre
    Salmon, Joseph
    Vaiter, Samuel
    [J]. OPTIMIZATION LETTERS, 2024, 18 (01) : 135 - 154
  • [37] Inexact Variable Metric Stochastic Block-Coordinate Descent for Regularized Optimization
    Lee, Ching-pei
    Wright, Stephen J.
    [J]. JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2020, 185 (01) : 151 - 187
  • [39] A block coordinate descent approach for sparse principal component analysis
    Zhao, Qian
    Meng, Deyu
    Xu, Zongben
    Gao, Chenqiang
    [J]. NEUROCOMPUTING, 2015, 153 : 180 - 190
  • [40] Alternating proximal gradient method for sparse nonnegative Tucker decomposition
    Xu, Yangyang
    [J]. MATHEMATICAL PROGRAMMING COMPUTATION, 2015, 7 (01) : 39 - 70