Learning tensor networks with tensor cross interpolation: New algorithms and libraries

Cited: 0
Authors
Fernandez, Yuriel Nunez [1 ,2 ]
Ritter, Marc K. [3 ,4 ]
Jeannin, Matthieu [2 ]
Li, Jheng-Wei [2 ]
Kloss, Thomas [1 ]
Louvet, Thibaud [2 ]
Terasaki, Satoshi [6 ]
Parcollet, Olivier [5 ,7 ]
von Delft, Jan [3 ,4 ]
Shinaoka, Hiroshi [8 ]
Waintal, Xavier [2 ]
Affiliations
[1] Univ Grenoble Alpes, CNRS, Neel Inst, F-38000 Grenoble, France
[2] Univ Grenoble Alpes, CEA, Grenoble INP, IRIG,Pheliqs, F-38000 Grenoble, France
[3] Ludwig Maximilians Univ Munchen, Arnold Sommerfeld Ctr Theoret Phys, Ctr Nanosci, D-80333 Munich, Germany
[4] Ludwig Maximilians Univ Munchen, Munich Ctr Quantum Sci & Technol, D-80333 Munich, Germany
[5] Flatiron Inst, Ctr Computat Quantum Phys, 162 5th Ave, New York, NY 10010 USA
[6] AtelierArith, Sendai, Miyagi 9800004, Japan
[7] Univ Paris Saclay, CNRS, CEA, Inst Phys theor, F-91191 Gif Sur Yvette, France
[8] Saitama Univ, Dept Phys, Saitama 3388570, Japan
Source
SCIPOST PHYSICS | 2025, Vol. 18, No. 03
Keywords
SCHUR COMPLEMENT; APPROXIMATION; MATRIX; QUASIOPTIMALITY;
DOI
10.21468/SciPostPhys.18.3.104
Chinese Library Classification
O4 [Physics];
Discipline Code
0702;
Abstract
The tensor cross interpolation (TCI) algorithm is a rank-revealing algorithm for decomposing low-rank, high-dimensional tensors into tensor trains/matrix product states (MPS). TCI learns a compact MPS representation of the entire object from a tiny training data set. Once obtained, the large existing MPS toolbox provides exponentially fast algorithms for performing a large set of operations. We discuss several improvements and variants of TCI. In particular, we show that replacing the cross interpolation by the partially rank-revealing LU decomposition yields a more stable and more flexible algorithm than the original algorithm. We also present two open source libraries, xfac in Python/C++ and TensorCrossInterpolation.jl in Julia, that implement these improved algorithms, and illustrate them on several applications. These include sign-problem-free integration in large dimension, the "superhigh-resolution" quantics representation of functions, the solution of partial differential equations, the superfast Fourier transform, the computation of partition functions, and the construction of matrix product operators.
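To make the "rank-revealing" idea concrete, the following is a minimal toy sketch (not the xfac or TensorCrossInterpolation.jl API) of the matrix-level building block behind TCI: a cross (skeleton) approximation A ≈ C P⁻¹ R, where pivots are chosen greedily by maximizing the residual, exactly the pivot-search strategy that a rank-revealing LU generalizes. The function name and full-search pivoting are illustrative assumptions; real TCI implementations search only a subset of tensor entries.

```python
import numpy as np

def cross_approximate(A, rank):
    """Greedy cross (skeleton) approximation of a matrix.

    Picks `rank` row/column pivots by full-pivot residual
    maximization, then returns A ~= C @ inv(P) @ R, where
    C are the chosen columns, R the chosen rows, and P their
    intersection block.
    """
    rows, cols = [], []
    resid = A.copy()
    for _ in range(rank):
        # Pivot = entry of largest absolute residual.
        i, j = np.unravel_index(np.argmax(np.abs(resid)), resid.shape)
        if resid[i, j] == 0:
            break  # residual vanished: exact rank already reached
        rows.append(i)
        cols.append(j)
        # Rank-1 update: subtract the cross through the pivot.
        resid = resid - np.outer(resid[:, j], resid[i, :]) / resid[i, j]
    C = A[:, cols]              # selected columns
    R = A[rows, :]              # selected rows
    P = A[np.ix_(rows, cols)]   # pivot (intersection) block
    return C @ np.linalg.solve(P, R)
```

For a matrix of exact rank r, r pivots reproduce it to machine precision; TCI lifts this pivot search to the tensor-train setting, where the decomposition is learned from a small set of sampled entries rather than the full tensor.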
Pages: 74