A recursive model-reduction method for approximate inference in Gaussian Markov random fields

Cited by: 7
Authors
Johnson, Jason K. [1]
Willsky, Alan S. [1,2]
Affiliations
[1] MIT, Dept Elect Engn & Comp Sci, Cambridge, MA 02139 USA
[2] Alphatech Inc, Burlington, MA USA
Keywords
approximate inference; Gaussian Markov random fields; graphical models; information projection; model reduction; maximum entropy
DOI
10.1109/TIP.2007.912018
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
This paper presents recursive cavity modeling, a principled, tractable approach to approximate, near-optimal inference for large Gauss-Markov random fields. The main idea is to subdivide the random field into smaller subfields and to construct cavity models which approximate these subfields. Each cavity model is a concise yet faithful model for the surface of one subfield, sufficient for near-optimal inference in adjacent subfields. This basic idea leads to a tree-structured algorithm which recursively builds a hierarchy of cavity models during an "upward pass" and then builds a complementary set of blanket models during a reverse "downward pass." The marginal statistics of individual variables can then be approximated using their blanket models. Model thinning plays an important role, allowing us to develop thinned cavity and blanket models and thereby providing tractable approximate inference. We develop a maximum-entropy approach that exploits certain tractable representations of Fisher information on thin chordal graphs. Given the resulting set of thinned cavity models, we also develop a fast preconditioner, which provides a simple iterative method to compute optimal estimates. Thus, our overall approach combines recursive inference, variational learning, and iterative estimation. We demonstrate the accuracy and scalability of this approach on several challenging, large-scale remote sensing problems.
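The cavity-model idea builds on a standard fact about Gaussian Markov random fields in information form: eliminating the interior of a subfield by a Schur complement yields an exact model on its boundary ("surface") variables, which the paper then approximates by thinning. As a hedged illustration of that underlying operation only (a toy 4-node chain, not the paper's thinned cavity models or algorithm; all names here are hypothetical):

```python
import numpy as np

# Toy Gaussian MRF in information form: p(x) ∝ exp(-x^T J x / 2 + h^T x),
# here a 4-node chain with tridiagonal precision matrix J.
J = np.array([
    [2.0, -0.5,  0.0,  0.0],
    [-0.5, 2.0, -0.5,  0.0],
    [0.0, -0.5,  2.0, -0.5],
    [0.0,  0.0, -0.5,  2.0],
])
h = np.array([1.0, 0.0, 0.0, 1.0])

def eliminate(J, h, keep, drop):
    """Marginalize out the `drop` variables via a Schur complement,
    returning the exact information-form model on the `keep` (surface)
    variables: J_kk - J_kd J_dd^{-1} J_dk and h_k - J_kd J_dd^{-1} h_d."""
    Jkk = J[np.ix_(keep, keep)]
    Jkd = J[np.ix_(keep, drop)]
    Jdd = J[np.ix_(drop, drop)]
    sol = np.linalg.solve(Jdd, np.column_stack([Jkd.T, h[drop]]))
    J_marg = Jkk - Jkd @ sol[:, :-1]
    h_marg = h[keep] - Jkd @ sol[:, -1]
    return J_marg, h_marg

# Exact boundary model on nodes {0, 3} after eliminating the interior {1, 2}.
J_s, h_s = eliminate(J, h, keep=[0, 3], drop=[1, 2])

# Consistency check: marginal means agree with solving the full system.
mu_full = np.linalg.solve(J, h)
mu_surface = np.linalg.solve(J_s, h_s)
assert np.allclose(mu_surface, mu_full[[0, 3]])
```

On trees and thin graphs these eliminations stay sparse, but for general fields the boundary model fills in; the paper's contribution is keeping such surface models thin (via information projection) so the recursive upward/downward passes remain tractable.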
Pages: 70-83
Page count: 14