Sparse and low-rank matrix regularization for learning time-varying Markov networks

Cited by: 0
Authors
Jun-ichiro Hirayama
Aapo Hyvärinen
Shin Ishii
Affiliations
[1] Brain Information Communication Research Laboratory Group, Advanced Telecommunications Research Institute International (ATR)
[2] Department of Computer Science and HIIT, University of Helsinki
[3] Graduate School of Informatics, Kyoto University
Source
Machine Learning | 2016 / Volume 105
Keywords
Time-varying Markov network; Nuclear-norm regularization; L1-norm regularization; Alternating direction method of multipliers
DOI
Not available
Abstract
Statistical dependencies observed in real-world phenomena often change drastically with time. Graphical dependency models, such as Markov networks (MNs), must deal with this temporal heterogeneity in order to draw meaningful conclusions about the transient nature of the target phenomena. However, in practice, the estimation of time-varying dependency graphs can be inefficient due to the potentially large number of parameters of interest. To overcome this problem, we propose a novel approach to learning time-varying MNs that effectively reduces the number of parameters by constraining the rank of the parameter matrix. The underlying idea is that the effective dimensionality of the parameter space is relatively low in many realistic situations. Temporal smoothness and sparsity of the network are also incorporated, as in previous studies. The proposed method is formulated as a convex minimization of a smoothed empirical loss with both ℓ1- and nuclear-norm regularization terms. This non-smooth optimization problem is numerically solved by the alternating direction method of multipliers. We take the Ising model as a fundamental example of an MN, and we show in several simulation studies that the rank-reducing effect of the nuclear norm can improve the estimation performance of time-varying dependency graphs. We also demonstrate the utility of the method for analyzing real-world datasets, improving both the interpretability and the predictability of the obtained networks.
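As a rough illustration of the two regularizers mentioned in the abstract, the sketch below applies ADMM to a simplified surrogate problem, minimizing 0.5*||Theta - Z||_F^2 + lam1*||Theta||_1 + lam2*||Theta||_*, rather than to the paper's smoothed empirical loss for the time-varying Ising model. The proximal steps are the standard soft-thresholding operator (for the ℓ1 term) and singular-value thresholding (for the nuclear-norm term); all function names, the penalty parameter rho, and the toy data are illustrative assumptions, not taken from the paper.

    # Minimal ADMM sketch for a sparse + low-rank regularized surrogate problem
    # (illustrative only; not the authors' algorithm for time-varying MNs).
    import numpy as np

    def prox_l1(V, tau):
        """Soft-thresholding: proximal operator of tau * ||.||_1."""
        return np.sign(V) * np.maximum(np.abs(V) - tau, 0.0)

    def prox_nuclear(V, tau):
        """Singular-value thresholding: proximal operator of tau * ||.||_*."""
        U, s, Vt = np.linalg.svd(V, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def admm_sparse_lowrank(Z, lam1, lam2, rho=1.0, n_iter=200):
        """ADMM with auxiliary copies S (sparse) and L (low-rank) of Theta,
        solving min 0.5*||Theta - Z||_F^2 + lam1*||S||_1 + lam2*||L||_*
        subject to Theta = S and Theta = L."""
        Theta = np.zeros_like(Z)
        S = np.zeros_like(Z); L = np.zeros_like(Z)
        U1 = np.zeros_like(Z); U2 = np.zeros_like(Z)  # scaled dual variables
        for _ in range(n_iter):
            # Theta-update: quadratic subproblem, closed form
            Theta = (Z + rho * (S - U1) + rho * (L - U2)) / (1.0 + 2.0 * rho)
            # S-update: proximal step for the l1 penalty
            S = prox_l1(Theta + U1, lam1 / rho)
            # L-update: proximal step for the nuclear-norm penalty
            L = prox_nuclear(Theta + U2, lam2 / rho)
            # dual updates enforce Theta = S and Theta = L at convergence
            U1 += Theta - S
            U2 += Theta - L
        return Theta

    # Toy usage on a random matrix standing in for a noisy parameter estimate.
    rng = np.random.default_rng(0)
    Z = rng.standard_normal((20, 20))
    Theta_hat = admm_sparse_lowrank(Z, lam1=0.1, lam2=0.5)

In the paper's formulation the data-fit term is a temporally smoothed empirical loss for the Ising model rather than the squared distance to Z used here, so the Theta-update would differ, while the two proximal steps for the ℓ1 and nuclear-norm terms remain the essential ingredients.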
Pages: 335-366
Number of pages: 31