Schatten Norms in Matrix Streams: Hello Sparsity, Goodbye Dimension

Cited by: 0
Authors
Braverman, Vladimir [1 ]
Krauthgamer, Robert [2 ]
Krishnan, Aditya [1 ]
Sinoff, Roi [2 ]
Affiliations
[1] Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD 21218 USA
[2] Weizmann Inst Sci, Dept Comp Sci & Appl Math, Rehovot, Israel
Funding
Israel Science Foundation;
Keywords
LOG-DETERMINANT;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spectral functions of large matrices contain important structural information about the underlying data, and are thus becoming increasingly important. Large matrices representing real-world data are often sparse or doubly sparse (i.e., sparse in both rows and columns), and are accessed as a stream of updates, typically organized in row-order. In this setting, where space (memory) is the limiting resource, all known algorithms require space that is polynomial in the dimension of the matrix, even for sparse matrices. We address this challenge by providing the first algorithm whose space requirement is independent of the matrix dimension, assuming the matrix is doubly sparse and presented in row-order. Our algorithms approximate the Schatten p-norms, which we use in turn to approximate other spectral functions, such as the logarithm of the determinant, the trace of the matrix inverse, and the Estrada index. We validate these theoretical performance bounds by numerical experiments on real-world matrices representing social networks. We further prove that multiple passes are unavoidable in this setting, and show extensions of our primary technique, including a trade-off between space requirements and number of passes.
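For orientation, the quantities the abstract refers to can be stated concretely: the Schatten p-norm of a matrix A is the l_p norm of its singular values, and for a symmetric positive-definite matrix the log-determinant is the sum of the logs of its eigenvalues. The sketch below (a minimal NumPy illustration, not the paper's streaming algorithm) computes both directly from a small dense matrix:

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm: (sum_i sigma_i^p)^(1/p) over singular values sigma_i."""
    s = np.linalg.svd(A, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

# Small symmetric positive-definite example matrix (eigenvalues 1 and 3).
A = np.array([[2.0, 1.0], [1.0, 2.0]])

# p = 2 recovers the Frobenius norm; p = 1 is the trace/nuclear norm.
frob = schatten_norm(A, 2)

# For PSD matrices, log det(A) = sum of log eigenvalues -- one of the
# spectral functions the paper approximates via Schatten norms.
eigvals = np.linalg.eigvalsh(A)
logdet = np.log(eigvals).sum()
print(frob, logdet)
```

The streaming setting of the paper replaces the full SVD above with small-space estimators of sum_i sigma_i^p for doubly-sparse row-order streams; this dense computation only fixes the definitions.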
Pages: 11