Fast and scalable learning of sparse changes in high-dimensional graphical model structure

Cited: 1
Authors:
Wang, Beilun [1 ]
Zhang, Jiaqi [2 ]
Xu, Haoqing [3 ]
Tao, Te [3 ]
Affiliations:
[1] Southeast Univ, Sch Comp Sci & Engn, Nanjing, Peoples R China
[2] Brown Univ, Dept Comp Sci, Providence, RI 02912 USA
[3] Southeast Univ, Sch Artificial Intelligence, Nanjing, Peoples R China
Keywords:
Gaussian graphical models; Elementary estimator; Nonparanormal distribution; Sparse changes; INVERSE COVARIANCE ESTIMATION; SELECTION; NETWORKS;
DOI:
10.1016/j.neucom.2022.09.137
Chinese Library Classification (CLC):
TP18 [Artificial Intelligence Theory];
Discipline codes:
081104; 0812; 0835; 1405;
Abstract:
We focus on the problem of estimating the change in the dependency structures of two p-dimensional Gaussian graphical models (GGMs). Previous approaches to sparse change estimation in GGMs require expensive and difficult non-smooth optimization. We propose a novel method, DIFFEE, for estimating DIFFerential networks via an Elementary Estimator in the high-dimensional setting. DIFFEE admits a fast, closed-form solution that enables it to work at large scale. Note that a GGM assumes the data are generated from a Gaussian distribution. This Gaussian assumption is too strict, however, and is not satisfied by all real-world data generated by complex processes. We therefore extend DIFFEE to NPN-DIFFEE by assuming that the data are drawn from a nonparanormal distribution (a large family of distributions) instead of a multivariate Gaussian distribution, so NPN-DIFFEE is applicable under more general conditions. We conduct a rigorous statistical analysis showing that, surprisingly, DIFFEE achieves the same asymptotic convergence rates as state-of-the-art estimators that are much more difficult to compute. Our experimental results on multiple synthetic datasets and one real-world dataset on brain connectivity show strong performance improvements over baselines, as well as significant computational benefits. (C) 2022 Elsevier B.V. All rights reserved.
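The closed-form estimation described in the abstract can be illustrated with a minimal sketch. It follows the elementary-estimator form from the authors' earlier AISTATS 2018 work: invert an element-wise hard-thresholded sample covariance for each condition, then soft-threshold the difference of the two proxy precision matrices. The function names and the parameter values `v` and `lam` below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def soft_threshold(a, lam):
    """Element-wise soft-thresholding operator S_lam(a)."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def diffee(X_c, X_d, v=0.1, lam=0.05):
    """Sketch of a closed-form differential-network estimate.

    X_c, X_d : (n, p) samples from the two conditions.
    v        : hard-threshold level applied to the sample covariance so
               that its inverse is well-defined when p is large
               (an assumption of this sketch).
    lam      : sparsity level of the estimated difference Delta.
    """
    def backward_map(X):
        S = np.cov(X, rowvar=False)
        # Hard-threshold small off-diagonal entries, keep the diagonal.
        T = np.where(np.abs(S) >= v, S, 0.0)
        np.fill_diagonal(T, np.diag(S))
        return np.linalg.inv(T)

    # Closed form: soft-threshold the difference of proxy precision matrices.
    return soft_threshold(backward_map(X_d) - backward_map(X_c), lam)
```

Because every step is a matrix inversion or an element-wise operation, no iterative non-smooth solver is needed, which is the source of the computational benefit the abstract claims.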
Pages: 39-57 (19 pages)
Related papers (50 in total; first 10 shown):
  • [1] Wang, Beilun; Sekhon, Arshdeep; Qi, Yanjun. Fast and Scalable Learning of Sparse Changes in High-Dimensional Gaussian Graphical Model Structure. International Conference on Artificial Intelligence and Statistics, Vol. 84, 2018.
  • [2] Tugnait, Jitendra K. On sparse high-dimensional graphical model learning for dependent time series. Signal Processing, 2022, 197.
  • [3] Tugnait, Jitendra K. Sparse High-Dimensional Matrix-Valued Graphical Model Learning from Dependent Data. 2023 IEEE Statistical Signal Processing Workshop (SSP), 2023: 344-348.
  • [4] Zhao, Boxin; Zhai, Percy S.; Wang, Y. Samuel; Kolar, Mladen. High-dimensional functional graphical model structure learning via neighborhood selection approach. Electronic Journal of Statistics, 2024, 18(1): 1042-1129.
  • [5] Verlinden, P. The sparse structure of high-dimensional integrands. Journal of Computational and Applied Mathematics, 2001, 132(1): 33-49.
  • [6] Tugnait, Jitendra K. Learning Sparse High-Dimensional Matrix-Valued Graphical Models From Dependent Data. IEEE Transactions on Signal Processing, 2024, 72: 3363-3379.
  • [7] Hoyle, D. C.; Rattray, M. PCA learning for sparse high-dimensional data. Europhysics Letters, 2003, 62(1): 117-123.
  • [8] Liu, Kuan; Bellet, Aurelien; Sha, Fei. Similarity Learning for High-Dimensional Sparse Data. Artificial Intelligence and Statistics, Vol. 38, 2015: 653-662.
  • [9] Cherkassky, Vladimir; Chen, Hsiang-Han; Shiao, Han-Tai. Group Learning for High-Dimensional Sparse Data. 2019 International Joint Conference on Neural Networks (IJCNN), 2019.
  • [10] Schnass, Karin; Vybiral, Jan. Compressed Learning of High-Dimensional Sparse Functions. 2011 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2011: 3924-3927.