Lifted Message Passing as Reparametrization of Graphical Models

Times cited: 0
Authors
Mladenov, Martin [1 ]
Globerson, Amir [2 ]
Kersting, Kristian [1 ]
Affiliations
[1] TU Dortmund Univ, Dortmund, Germany
[2] Hebrew Univ Jerusalem, Jerusalem, Israel
Keywords
DOI
Not available
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Lifted inference approaches can considerably speed up probabilistic inference in Markov random fields (MRFs) with symmetries. Given evidence, they essentially form a lifted, i.e., reduced, factor graph by grouping together indistinguishable variables and factors. Typically, however, lifted factor graphs are not amenable to off-the-shelf message passing (MP) approaches, and hence require one either to use generic optimization tools, which would be slow for these problems, or to design modified MP algorithms. Here, we demonstrate that the reliance on modified MP can be eliminated for the class of MP algorithms arising from MAP-LP relaxations of pairwise MRFs. Specifically, we show that a given MRF induces a whole family of MRFs of different sizes sharing essentially the same MAP-LP solution. In turn, we give an efficient algorithm that computes the smallest member of this family that can be solved using off-the-shelf MP. This incurs no major overhead: the selected MRF is at most twice as large as the fully lifted factor graph. This has several implications for lifted inference. For instance, running MPLP results in the first convergent lifted MP approach for MAP-LP relaxations. Doing so can be faster than solving the MAP-LP using lifted linear programming. Most importantly, it suggests a novel view of lifted inference: it can be viewed as standard inference in a reparametrized model.
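To illustrate the grouping step the abstract refers to, below is a minimal Python sketch of the standard colour-passing (colour refinement) procedure that lifted inference uses to merge indistinguishable variables and factors of a factor graph into super-nodes. This is not the authors' implementation, and it covers only the lifting step, not the paper's reparametrization or MAP-LP machinery; the function name colour_passing, the data layout, and the triangle example are hypothetical choices for illustration.

from collections import defaultdict

def colour_passing(variables, factors, evidence=None, max_iters=100):
    """Group indistinguishable variables and factors of a factor graph.

    variables : iterable of variable names
    factors   : dict mapping factor name -> (potential_signature, args), where
                potential_signature is any printable description of the potential
                table (equal signatures <=> identical potentials) and args is the
                tuple of variable names the factor touches
    evidence  : optional dict mapping variable name -> observed value
    Returns (variable_groups, factor_groups), each a dict name -> group id.
    """
    evidence = evidence or {}

    def relabel(signatures):
        # Map arbitrary colour signatures to small consecutive integer ids.
        ids = {}
        return {k: ids.setdefault(sig, len(ids)) for k, sig in signatures.items()}

    # Initial colours: variables by their evidence, factors by their potentials.
    var_col = relabel({v: repr(evidence.get(v)) for v in variables})
    fac_col = relabel({f: repr(sig) for f, (sig, _) in factors.items()})

    for _ in range(max_iters):
        # Each factor collects its own colour plus the ordered colours of its arguments.
        new_fac = relabel({f: (fac_col[f], tuple(var_col[v] for v in args))
                           for f, (_, args) in factors.items()})
        # Each variable collects its own colour plus the multiset of incident factor colours.
        incident = defaultdict(list)
        for f, (_, args) in factors.items():
            for v in args:
                incident[v].append(new_fac[f])
        new_var = relabel({v: (var_col[v], tuple(sorted(incident[v])))
                           for v in variables})
        # Stop once the partition is stable (refinement no longer splits any group).
        stable = (len(set(new_var.values())) == len(set(var_col.values())) and
                  len(set(new_fac.values())) == len(set(fac_col.values())))
        var_col, fac_col = new_var, new_fac
        if stable:
            break
    return var_col, fac_col

# Usage: a fully symmetric triangle MRF with identical pairwise potentials.
# All three variables collapse into one super-variable and all three factors
# into one super-factor; adding evidence on x1 would break the symmetry.
variables = ["x1", "x2", "x3"]
factors = {
    "f12": ("same_potential", ("x1", "x2")),
    "f23": ("same_potential", ("x2", "x3")),
    "f13": ("same_potential", ("x1", "x3")),
}
var_groups, fac_groups = colour_passing(variables, factors)
print(var_groups)  # {'x1': 0, 'x2': 0, 'x3': 0}
print(fac_groups)  # {'f12': 0, 'f23': 0, 'f13': 0}

The resulting groups define the lifted factor graph; the paper's contribution is to then pick, from the family of MRFs consistent with these symmetries, one that standard MP algorithms such as MPLP can solve directly.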
Pages: 603 - 612
Number of pages: 10
Related Papers
50 records in total
  • [1] Message Passing for Collective Graphical Models
    Sun, Tao
    Sheldon, Daniel
    Kumar, Akshat
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 37, 2015, 37 : 853 - 861
  • [2] Consensus Message Passing for Layered Graphical Models
    Jampani, Varun
    Eslami, S. M. Ali
    Tarlow, Daniel
    Kohli, Pushmeet
    Winn, John
    [J]. ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 38, 2015, 38 : 425 - 433
  • [3] Feedback Message Passing for Inference in Gaussian Graphical Models
    Liu, Ying
    Chandrasekaran, Venkat
    Anandkumar, Animashree
    Willsky, Alan S.
    [J]. 2010 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY, 2010, : 1683 - 1687
  • [4] Distributed Message Passing for Large Scale Graphical Models
    Schwing, Alexander
    Hazan, Tamir
    Pollefeys, Marc
    Urtasun, Raquel
    [J]. 2011 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2011,
  • [5] Stein Variational Message Passing for Continuous Graphical Models
    Wang, Dilin
    Zeng, Zhe
    Liu, Qiang
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [6] Feedback Message Passing for Inference in Gaussian Graphical Models
    Liu, Ying
    Chandrasekaran, Venkat
    Anandkumar, Animashree
    Willsky, Alan S.
    [J]. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2012, 60 (08) : 4135 - 4150
  • [7] Track-stitching using graphical models and message passing
    van der Merwe, L. J.
    de Villiers, J. P.
    [J]. 2013 16TH INTERNATIONAL CONFERENCE ON INFORMATION FUSION (FUSION), 2013, : 758 - 765
  • [8] Lifted Message Passing for Hybrid Probabilistic Inference
    Chen, Yuqiao
    Ruozzi, Nicholas
    Natarajan, Sriraam
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 5701 - 5707
  • [9] Efficient Sequential Clamping for Lifted Message Passing
    Hadiji, Fabian
    Ahmadi, Babak
    Kersting, Kristian
    [J]. KI 2011: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2011, 7006 : 122 - 133
  • [10] Lifted graphical models: a survey
    Kimmig, Angelika
    Mihalkova, Lilyana
    Getoor, Lise
    [J]. MACHINE LEARNING, 2015, 99 (01) : 1 - 45