Block Decomposition for Very Large-Scale Nonnegative Tensor Factorization

Cited by: 0
Authors
Phan, Anh Huy [1]
Cichocki, Andrzej [1]
Affiliations
[1] RIKEN, Lab Adv Brain Signal Proc, Brain Sci Inst, Wako, Saitama 3510198, Japan
Keywords: (none listed)
DOI: not available
Chinese Library Classification (CLC): TP301 [Theory and Methods]
Discipline Classification Code: 081202
Abstract
Nonnegative parallel factor analysis (PARAFAC), also called nonnegative tensor factorization (NTF), finds the nonnegative factors hidden in raw tensor data and has many potential applications in neuroscience, bioinformatics, chemometrics, etc. [1], [2]. NTF algorithms can be established straightforwardly from the unfolded tensor and Khatri-Rao products of the factors [1], [3]. However, such algorithms produce large matrices and require large amounts of memory for temporary variables, so the decomposition of large-scale tensors remains a challenging problem for NTF. To deal with this problem, a new tensor factorization scheme is proposed in which the data tensor is divided into a grid of small subtensors and processed in two stages: PARAFAC decomposition of the subtensors, followed by construction of the full factors for the whole data. The two new algorithms compute Hadamard products and operate on relatively small matrices; they are therefore extremely fast in comparison with existing NTF algorithms. Extensive experiments confirm the validity, high performance, and high speed of the developed algorithms.
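
The abstract only outlines the scheme, so below is a minimal NumPy sketch (Python) of the first stage under the stated idea: the tensor is split into a grid of small subtensors, and a plain multiplicative-update nonnegative PARAFAC, built from unfoldings and Khatri-Rao products, is run independently on each block. All names here (khatri_rao, unfold, ntf_mu, grid_blocks) are illustrative rather than from the paper, and the authors' second stage, which assembles the full-size factors from the sub-factors via Hadamard products, is not reproduced.

import numpy as np

def khatri_rao(A, B):
    # Column-wise Khatri-Rao product: (I x R), (J x R) -> (I*J x R), C-order rows.
    I, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def unfold(T, mode):
    # Mode-n unfolding of a 3-way tensor (C-order, matching khatri_rao above).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def ntf_mu(T, R, n_iter=200, eps=1e-12):
    # Plain multiplicative-update nonnegative PARAFAC for one small block.
    rng = np.random.default_rng(0)
    factors = [rng.random((T.shape[n], R)) for n in range(3)]
    for _ in range(n_iter):
        for n in range(3):
            others = [factors[m] for m in range(3) if m != n]
            KR = khatri_rao(*others)          # (product of the other two dims) x R
            Xn = unfold(T, n)
            num = Xn @ KR
            den = factors[n] @ (KR.T @ KR) + eps
            factors[n] *= num / den           # multiplicative update keeps factors nonnegative
    return factors

def grid_blocks(T, splits):
    # Yield (grid index, subtensor) pairs for a grid of subtensors along each mode.
    ranges = [np.array_split(np.arange(s), k) for s, k in zip(T.shape, splits)]
    for i, ri in enumerate(ranges[0]):
        for j, rj in enumerate(ranges[1]):
            for k, rk in enumerate(ranges[2]):
                yield (i, j, k), T[np.ix_(ri, rj, rk)]

# Toy data: a 60 x 60 x 60 nonnegative rank-3 tensor.
rng = np.random.default_rng(1)
A, B, C = (rng.random((60, 3)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Stage 1: factorize each subtensor of a 2 x 2 x 2 grid independently; every
# sub-problem only touches small unfoldings and Khatri-Rao products.
blocks = dict(grid_blocks(T, (2, 2, 2)))
sub_factors = {idx: ntf_mu(blk, R=3) for idx, blk in blocks.items()}
for idx, (Ai, Bi, Ci) in sub_factors.items():
    approx = np.einsum('ir,jr,kr->ijk', Ai, Bi, Ci)
    err = np.linalg.norm(blocks[idx] - approx) / np.linalg.norm(blocks[idx])
    print(idx, 'relative block error:', round(err, 4))

Because each block is only a fraction of the full tensor, the unfoldings and Khatri-Rao products in this stage stay small, which is the memory advantage the abstract claims; the quality of the final full-size factors, however, depends on the paper's stage-two construction, which this sketch does not attempt.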
Pages: 316-319
Page count: 4
Related Papers (50 in total)
  • [1] Block Decomposition for Very Large-Scale Nonnegative Tensor Factorization
    Phan, Anh Huy
    Cichocki, Andrzej
    [J]. 2009 3RD IEEE INTERNATIONAL WORKSHOP ON COMPUTATIONAL ADVANCES IN MULTI-SENSOR ADAPTIVE PROCESSING (CAMSAP), 2009, : 316 - 319
  • [2] Fast Nonnegative Tensor Factorization for Very Large-Scale Problems Using Two-Stage Procedure
    Phan, Anh Huy
    Cichocki, Andrzej
    [J]. 2009 3RD IEEE INTERNATIONAL WORKSHOP ON COMPUTATIONAL ADVANCES IN MULTI-SENSOR ADAPTIVE PROCESSING (CAMSAP), 2009, : 297 - 300
  • [3] Fast Nonnegative Tensor Factorization for Very Large-Scale Problems Using Two-Stage Procedure
    Phan, Anh Huy
    Cichocki, Andrzej
    [J]. 2009 3RD IEEE INTERNATIONAL WORKSHOP ON COMPUTATIONAL ADVANCES IN MULTI-SENSOR ADAPTIVE PROCESSING (CAMSAP 2009), 2009, : 297 - 300
  • [4] Large-scale Tucker tensor factorization for sparse and accurate decomposition
    Jang, Jun-Gi
    Park, Moonjeong
    Lee, Jongwuk
    Sael, Lee
    [J]. The Journal of Supercomputing, 2022, 78 : 17992 - 18022
  • [5] Large-scale Tucker tensor factorization for sparse and accurate decomposition
    Jang, Jun-Gi
    Park, Moonjeong
    Lee, Jongwuk
    Sael, Lee
    [J]. JOURNAL OF SUPERCOMPUTING, 2022, 78 (16): : 17992 - 18022
  • [6] NESTEROV-BASED PARALLEL ALGORITHM FOR LARGE-SCALE NONNEGATIVE TENSOR FACTORIZATION
    Liavas, A. P.
    Kostoulas, G.
    Lourakis, G.
    Huang, K.
    Sidiropoulos, N. D.
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 5895 - 5899
  • [7] DISTRIBUTED LARGE-SCALE TENSOR DECOMPOSITION
    de Almeida, Andre L. F.
    Kibangou, Alain Y.
    [J]. 2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014,
  • [8] Fast Tucker Factorization for Large-Scale Tensor Completion
    Lee, Dongha
    Lee, Jaehyung
    Yu, Hwanjo
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2018, : 1098 - 1103
  • [9] Accelerating block coordinate descent for nonnegative tensor factorization
    Ang, Andersen Man Shun
    Cohen, Jeremy E.
    Gillis, Nicolas
    Hien, Le Thi Khanh
    [J]. NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, 2021, 28 (05)
  • [10] Stochastic Gradients for Large-Scale Tensor Decomposition
    Kolda, Tamara G.
    Hong, David
    [J]. SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE, 2020, 2 (04): : 1066 - 1095