Accelerated Message Passing for Entropy-Regularized MAP Inference

Cited by: 0
Authors
Lee, Jonathan N. [1]
Pacchiano, Aldo [2]
Bartlett, Peter [2,3]
Jordan, Michael I. [2,3]
Affiliations
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
[2] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
[3] Univ Calif Berkeley, Dept Stat, Berkeley, CA 94720 USA
Keywords
COORDINATE DESCENT METHODS; POLYNOMIAL-TIME ALGORITHM; FASTER ALGORITHMS; LINEAR-PROGRAMS; RELAXATIONS;
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Maximum a posteriori (MAP) inference in discrete-valued Markov random fields is a fundamental problem in machine learning that involves identifying the most likely configuration of random variables given a distribution. Due to the difficulty of this combinatorial problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms that are often interpreted as coordinate descent on the dual LP. To achieve more desirable computational properties, a number of methods regularize the LP with an entropy term, leading to a class of smooth message passing algorithms with convergence guarantees. In this paper, we present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods. The proposed algorithms incorporate the familiar steps of standard smooth message passing algorithms, which can be viewed as coordinate minimization steps. We show that these accelerated variants achieve faster rates for finding epsilon-optimal points of the unregularized problem, and, when the LP is tight, we prove that the proposed algorithms recover the true MAP solution in fewer iterations than standard message passing algorithms.
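The smoothing idea at the heart of the abstract can be sketched in miniature: maximizing a linear objective over the probability simplex is a tiny LP whose optimum sits at an argmax vertex, and adding an entropy term makes the objective strongly concave with a closed-form softmax solution that approaches the LP vertex as the regularization weakens. The sketch below is illustrative only, not the paper's algorithm; the names `theta` (potentials) and `eta` (inverse regularization strength) are assumptions, not the authors' notation.

```python
import numpy as np

def entropy_regularized_argmax(theta, eta):
    """Closed-form maximizer of <theta, mu> + (1/eta) * H(mu) over the simplex.

    H is the Shannon entropy; the solution is softmax(eta * theta).
    As eta -> infinity, this approaches the unregularized LP solution,
    i.e. the vertex e_{argmax(theta)} of the simplex.
    """
    z = eta * np.asarray(theta, dtype=float)
    z -= z.max()                 # shift for numerical stability before exp
    p = np.exp(z)
    return p / p.sum()

theta = np.array([1.0, 3.0, 2.0])
for eta in [1.0, 10.0, 100.0]:
    mu = entropy_regularized_argmax(theta, eta)
    # The smoothed optimum concentrates on the argmax coordinate as eta grows.
    print(eta, mu, np.dot(theta, mu))
```

Smooth message passing algorithms exploit exactly this kind of closed-form coordinate update on the entropy-regularized dual; the acceleration in the paper layers randomized momentum-style steps (in the spirit of accelerated coordinate descent) on top of such updates.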
Pages: 11
Related Papers (50 records)
  • [1] Convergence Rates of Smooth Message Passing with Rounding in Entropy-Regularized MAP Inference
    Lee, Jonathan N.
    Pacchiano, Aldo
    Jordan, Michael I.
    [J]. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 3003 - 3013
  • [2] Entropy-Regularized Stochastic Games
    Savas, Yagiz
    Ahmadi, Mohamadreza
    Tanaka, Takashi
    Topcu, Ufuk
    [J]. 2019 IEEE 58TH CONFERENCE ON DECISION AND CONTROL (CDC), 2019, : 5955 - 5962
  • [3] ENTROPY-REGULARIZED OPTIMAL TRANSPORT GENERATIVE MODELS
    Liu, Dong
    Minh Thanh Vu
    Chatterjee, Saikat
    Rasmussen, Lars K.
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3532 - 3536
  • [4] An Entropy-Regularized ADMM For Binary Quadratic Programming
    Liu, Haoming
    Deng, Kangkang
    Liu, Haoyang
    Wen, Zaiwen
    [J]. JOURNAL OF GLOBAL OPTIMIZATION, 2023, 87 (2-4) : 447 - 479
  • [5] RELATIVE ENTROPY-REGULARIZED ROBUST OPTIMAL ORDER EXECUTION
    Wang, Meng
    Wang, Tai-Ho
    [J]. arXiv, 2023
  • [6] Entropy-Regularized Partially Observed Markov Decision Processes
    Molloy, Timothy L.
    Nair, Girish N.
    [J]. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2024, 69 (01) : 379 - 386
  • [7] Planning in entropy-regularized Markov decision processes and games
    Grill, Jean-Bastien
    Domingues, Omar D.
    Menard, Pierre
    Munos, Remi
    Valko, Michal
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [8] NUMERICAL COMPUTATION OF ENTROPY-REGULARIZED QUADRATIC OPTIMIZATION PROBLEMS
    Shi, Piqin
    Wang, Chengjing
    Xiang, Can
    Tang, Peipei
    [J]. Journal of Applied and Numerical Optimization, 2024, 6 (01) : 59 - 70
  • [9] An Entropy-Regularized Framework for Detecting Copy Number Variants
    Mohammadi, Majid
    Farahi, Fahime
    [J]. IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 2019, 66 (03) : 682 - 688