Blending Learning and Inference in Conditional Random Fields

Cited by: 0
Authors
Hazan, Tamir [1 ]
Schwing, Alexander G. [2 ]
Urtasun, Raquel [3 ]
Affiliations
[1] Technion Israel Inst Technol, IL-32000 Haifa, Israel
[2] Univ Illinois, Elect & Comp Engn & Coordinated Sci Lab, Urbana, IL 61801 USA
[3] Univ Toronto, 40 St George St, Toronto, ON M5S 2E4, Canada
Keywords: (none listed)
DOI: Not available
Chinese Library Classification (CLC) number: TP [Automation technology; Computer technology]
Discipline classification code: 0812
Abstract
Conditional random fields maximize the log-likelihood of training labels given the training data, e.g., object labels given images. In many cases the training labels are structures consisting of a set of variables, and the computational complexity of estimating their likelihood is exponential in the number of variables. Learning algorithms relax this computational burden by nesting approximate inference as a sub-procedure. In this paper we describe the objective function for nested learning and inference in conditional random fields. The devised objective maximizes the log-beliefs, probability distributions over subsets of the training variables that agree on their marginal probabilities. This objective is concave and consists of two types of variables, related to the learning and inference tasks respectively. Importantly, we then show how to blend the learning and inference procedures and reach the same optimum considerably faster. The proposed algorithm achieves state-of-the-art results in various computer vision applications.
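For orientation, here is a minimal sketch, in generic notation, of the exact CRF objective and the kind of belief-based surrogate the abstract describes; the feature map \phi, weights \theta, regions \alpha, and counting numbers c_\alpha below are illustrative assumptions, not the paper's exact formulation.

% Exact conditional log-likelihood over training pairs (x_i, y_i); the
% partition function Z sums over exponentially many structured labels y.
\ell(\theta) = \sum_i \Big[ \theta^\top \phi(x_i, y_i) - \log Z_\theta(x_i) \Big],
\qquad
Z_\theta(x) = \sum_{y} \exp\big( \theta^\top \phi(x, y) \big).

% Belief-based surrogate: replace \log Z by a bound expressed over beliefs
% b_\alpha on small subsets (regions) \alpha that must agree on their shared
% marginals (the local polytope \mathcal{L}); with suitable counting numbers
% c_\alpha \ge 0 (e.g., tree-reweighted choices) this upper-bounds \log Z.
\log Z_\theta(x) \;\le\; \max_{b \in \mathcal{L}}
\sum_{\alpha} \Big( \mathbb{E}_{b_\alpha}\!\big[ \theta^\top \phi_\alpha(x, \cdot) \big]
+ c_\alpha H(b_\alpha) \Big).

Substituting such a bound couples the weights \theta (learning variables) with the beliefs (inference variables). The paper's objective, phrased in terms of log-beliefs, is concave in both sets of variables jointly, which is what allows the learning and inference updates to be interleaved ("blended") instead of running inference to convergence inside every learning step.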
Pages: 25