Scalable Inference of Sparsely-changing Gaussian Markov Random Fields

Cited: 0
Authors
Fattahi, Salar [1]
Gomez, Andres [2]
Affiliations
[1] Univ Michigan, Dept Ind & Operat Engn, Ann Arbor, MI 48109 USA
[2] Univ Southern Calif, Dept Ind & Syst Engn, Los Angeles, CA 90089 USA
Funding
National Science Foundation (USA);
Keywords
DECAY BOUNDS; COVARIANCE; ALGORITHM; MATRICES; INVERSE; RATES;
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We study the problem of inferring time-varying Gaussian Markov random fields, where the underlying graphical model is both sparse and changes sparsely over time. Most existing methods for the inference of time-varying Markov random fields (MRFs) rely on regularized maximum likelihood estimation (MLE), which typically suffers from weak statistical guarantees and high computational cost. Instead, we introduce a new class of constrained optimization problems for the inference of sparsely-changing Gaussian MRFs (GMRFs). The proposed optimization problem is formulated based on exact ℓ0 regularization and can be solved in near-linear time and memory. Moreover, we show that the proposed estimator enjoys a provably small estimation error. We derive sharp statistical guarantees in the high-dimensional regime, showing that such problems can be learned with as few as one sample per time period. Our proposed method is extremely efficient in practice: it can accurately estimate sparsely-changing GMRFs with more than 500 million variables in less than one hour.
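To make the problem setup concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm) of a sparsely-changing GMRF: two consecutive precision matrices whose supports differ in a single edge, with a naive estimator that inverts the sample covariance and hard-thresholds small entries as a stand-in for the exact ℓ0-constrained formulation. The dimensions, threshold `tau`, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10  # dimension (illustrative; the paper scales to hundreds of millions of variables)

# Sparse, well-conditioned precision matrix Theta0 (eigenvalues 1 +/- 0.4 > 0).
Theta0 = np.eye(d)
Theta0[0, 1] = Theta0[1, 0] = 0.4
Theta0[2, 3] = Theta0[3, 2] = 0.4

# Sparsely-changing: Theta1 differs from Theta0 in a single edge (4, 5).
Theta1 = Theta0.copy()
Theta1[4, 5] = Theta1[5, 4] = 0.4

def sample_gmrf(Theta, n, rng):
    """Draw n samples from the zero-mean GMRF N(0, Theta^{-1})."""
    cov = np.linalg.inv(Theta)
    return rng.multivariate_normal(np.zeros(len(Theta)), cov, size=n)

def naive_estimate(X, tau=0.15):
    """Invert the sample covariance and hard-threshold small entries.
    This is a toy surrogate for an l0-constrained estimator, not the
    method proposed in the paper."""
    S = np.cov(X, rowvar=False)
    P = np.linalg.inv(S)
    P[np.abs(P) < tau] = 0.0
    return P

n = 20000
P0 = naive_estimate(sample_gmrf(Theta0, n, rng))
P1 = naive_estimate(sample_gmrf(Theta1, n, rng))

# The support change between consecutive time periods is sparse:
diff_support = (P1 != 0) != (P0 != 0)
print(int(diff_support.sum()))  # the two symmetric entries of the changed edge
```

With ample samples per period the thresholded inverse recovers both supports, so the support difference isolates the single changed edge; the paper's contribution is achieving such recovery with as few as one sample per time period, which this naive estimator cannot do.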
Pages: 13