Large-scale Sparse Tensor Decomposition Using a Damped Gauss-Newton Method

Cited by: 4
Authors
Ranadive, Teresa M. [1 ]
Baskaran, Muthu M. [2 ]
Affiliations
[1] Laboratory for Physical Sciences, College Park, MD 20740 USA
[2] Reservoir Labs Inc, New York, NY 10012 USA
Keywords
Big data analytics; high performance computing; damped Gauss-Newton; sparse tensor decomposition; line search; algorithms
DOI
10.1109/hpec43674.2020.9286202
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
CANDECOMP/PARAFAC (CP) tensor decomposition is a popular unsupervised machine learning method with numerous applications. It models a high-dimensional, multi-modal array (a tensor) as the sum of several low-dimensional components. Decomposing a tensor requires solving an optimization problem whose objective is typically the sum of squared differences between the entries of the tensor and those of the decomposition model. One algorithm occasionally used to solve such problems is CP-OPT-DGN, a damped Gauss-Newton all-at-once optimization method for CP tensor decomposition. However, no published results currently consider the decomposition of large-scale, sparse tensors (with up to billions of non-zeros) using this algorithm. This work considers the decomposition of such large-scale tensors using an efficiently implemented CP-OPT-DGN method. CP-OPT-DGN is observed to significantly outperform CP-ALS (CP-Alternating Least Squares) and CP-OPT-QNR (a quasi-Newton-Raphson all-at-once optimization method for CP tensor decomposition), two other widely used tensor decomposition algorithms, in terms of accuracy and latent behavior detection.
Pages: 8
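As a rough illustration of the damped Gauss-Newton approach described in the abstract, the Python sketch below fits a rank-R CP model to a small dense tensor by repeatedly solving the damped normal equations (J^T J + lambda*I) dx = -J^T r. It is a minimal sketch only, not the paper's implementation: the tensor sizes, the finite-difference Jacobian, and the simple accept/reject damping rule are illustrative assumptions, and a large-scale sparse implementation would instead exploit the sparsity of the tensor and the structure of the Gauss-Newton system.

# Minimal sketch of damped Gauss-Newton for CP decomposition (illustrative only).
import numpy as np

def cp_model(A, B, C):
    # Model entries: m[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def residual(x, X, R):
    # Unstack the factor matrices from the parameter vector and return
    # the vectorized difference between the model and the data tensor.
    I, J, K = X.shape
    A = x[:I * R].reshape(I, R)
    B = x[I * R:(I + J) * R].reshape(J, R)
    C = x[(I + J) * R:].reshape(K, R)
    return (cp_model(A, B, C) - X).ravel()

def numerical_jacobian(f, x, eps=1e-6):
    # Forward-difference Jacobian; fine for a toy problem, far too slow at scale.
    r0 = f(x)
    J = np.zeros((r0.size, x.size))
    for p in range(x.size):
        xp = x.copy()
        xp[p] += eps
        J[:, p] = (f(xp) - r0) / eps
    return J

def damped_gauss_newton(X, R, iters=50, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(sum(X.shape) * R)       # random initial factors
    f = lambda v: residual(v, X, R)
    for _ in range(iters):
        r = f(x)
        J = numerical_jacobian(f, x)
        # Damped normal equations: (J^T J + lam * I) dx = -J^T r
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(f(x + dx)) < np.linalg.norm(r):
            x, lam = x + dx, lam * 0.5              # accept step, relax damping
        else:
            lam *= 2.0                              # reject step, increase damping
    return 0.5 * np.linalg.norm(f(x)) ** 2          # final least-squares objective

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    I, J, K, R = 5, 4, 3, 2
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    X = np.einsum('ir,jr,kr->ijk', A, B, C)         # exact rank-2 target tensor
    print("final objective:", damped_gauss_newton(X, R))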
Related Papers
50 records in total
  • [1] FAST DAMPED GAUSS-NEWTON ALGORITHM FOR SPARSE AND NONNEGATIVE TENSOR FACTORIZATION
    Anh Huy Phan
    Tichavsky, Petr
    Cichocki, Andrzej
    [J]. 2011 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2011, : 1988 - 1991
  • [2] DAMPED GAUSS-NEWTON ALGORITHM FOR NONNEGATIVE TUCKER DECOMPOSITION
    Anh Huy Phan
    Tichavsky, Petr
    Cichocki, Andrzej
    [J]. 2011 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP (SSP), 2011, : 665 - 668
  • [3] Large-Scale Inversion of Magnetotelluric Data Using Regularized Gauss-Newton Method in the Data Space
    Nadasi, Endre
Gribenko, Alexander V.
    Zhdanov, Michael S.
    [J]. PURE AND APPLIED GEOPHYSICS, 2022, 179 (10) : 3785 - 3806
  • [4] A FURTHER IMPROVEMENT OF A FAST DAMPED GAUSS-NEWTON ALGORITHM FOR CANDECOMP-PARAFAC TENSOR DECOMPOSITION
    Tichavsky, Petr
    Anh Huy Phan
    Cichocki, Andrzej
    [J]. 2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 5964 - 5968
  • [5] A truncated nonmonotone Gauss-Newton method for large-scale nonlinear least-squares problems
    Fasano, G
    Lampariello, F
    Sciandrone, M
    [J]. COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2006, 34 (03) : 343 - 358
  • [6] A nonmonotone damped Gauss-Newton method for nonlinear complementarity problems
    Dong, Li
    [J]. ITALIAN JOURNAL OF PURE AND APPLIED MATHEMATICS, 2023, (49) : 206 - 215
  • [7] On the Gauss-Newton method
    Argyros, I. K.
    Hilout, S.
    [J]. JOURNAL OF APPLIED MATHEMATICS AND COMPUTING, 2011, 35 (1-2) : 537 - 550
  • [8] Gauss-Newton method
    Wang, Yong
    [J]. WILEY INTERDISCIPLINARY REVIEWS-COMPUTATIONAL STATISTICS, 2012, 4 (04) : 415 - 420
  • [9] The convergence of a smoothing damped Gauss-Newton method for nonlinear complementarity problem
    Ma, Changfeng
    Jiang, Lihua
    Wang, Desheng
    [J]. NONLINEAR ANALYSIS-REAL WORLD APPLICATIONS, 2009, 10 (04) : 2072 - 2087