Scalable learning and inference in Markov logic networks

Cited by: 6
Authors
Sun, Zhengya [1 ]
Zhao, Yangyang [1 ]
Wei, Zhuoyu [1 ]
Zhang, Wensheng [1 ]
Wang, Jue [1 ]
Institutions
[1] Chinese Academy of Sciences, Institute of Automation, Beijing 100864, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
Markov logic networks; Structure learning; Probabilistic inference; Large scale machine learning;
DOI
10.1016/j.ijar.2016.12.003
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Markov logic networks (MLNs) have emerged as a powerful representation that combines first-order logic with probabilistic graphical models, and they have shown very good results in many problem domains. However, current implementations of MLNs scale poorly due to the large search space and the intractable clause groundings, which prevents their widespread adoption. In this paper, we propose a general framework named Ground Network Sampling (GNS) for scaling up MLN learning and inference. GNS offers a new instantiation perspective by encoding ground substitutions as simple paths in the Herbrand universe, using the interactions among objects to constrain the search space. To keep this search tractable for large-scale problems, GNS integrates random walks with subgraph pattern mining, gradually building up a representative subset of simple paths. For inference, a template network is introduced to quickly locate promising paths that can ground given logical statements. The resulting sampled paths are then transformed into ground clauses, which can be used for clause creation and probabilistic inference. Experiments on several real-world datasets demonstrate that our approach offers better scalability while maintaining comparable or better predictive performance relative to state-of-the-art MLN techniques. (C) 2016 Elsevier Inc. All rights reserved.
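The abstract's core idea of sampling simple (non-repeating) paths among interacting objects via random walks can be illustrated with a toy sketch. This is not the paper's GNS algorithm (which also uses subgraph pattern mining and a template network); the graph, function name, and parameters below are all illustrative assumptions.

```python
import random

def sample_simple_paths(graph, start, max_len, n_walks, seed=0):
    """Sample simple paths from `start` via random walks.

    Each walk extends the current path only to unvisited neighbors,
    so every sampled path is simple (no repeated objects), mirroring
    the path-based grounding perspective described in the abstract.
    `max_len` bounds the number of edges per path.
    """
    rng = random.Random(seed)
    paths = set()
    for _ in range(n_walks):
        path = [start]
        visited = {start}
        while len(path) <= max_len:
            # Candidate next objects: neighbors not yet on this path.
            frontier = [v for v in graph.get(path[-1], []) if v not in visited]
            if not frontier:
                break
            nxt = rng.choice(frontier)
            path.append(nxt)
            visited.add(nxt)
        if len(path) > 1:
            paths.add(tuple(path))
    return paths

# Toy "ground network": objects as nodes, observed relations as edges
# (hypothetical names, for illustration only).
graph = {
    "Anna": ["Bob", "Chris"],
    "Bob": ["Anna", "Chris"],
    "Chris": ["Anna", "Bob", "Dana"],
    "Dana": ["Chris"],
}
paths = sample_simple_paths(graph, "Anna", max_len=3, n_walks=20)
```

In the actual framework, each such path would be mapped back to a ground substitution and hence to ground clauses usable for clause creation and inference.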
Pages: 39-55 (17 pages)
Related Papers (50 total)
  • [31] Markov logic networks
    Richardson, Matthew
    Domingos, Pedro
    MACHINE LEARNING, 2006, 62 : 107 - 136
  • [32] Scalable inference for Markov processes with intractable likelihoods
    Owen, Jamie
    Wilkinson, Darren J.
    Gillespie, Colin S.
    STATISTICS AND COMPUTING, 2015, 25 (01) : 145 - 156
  • [34] Max-Margin Weight Learning for Markov Logic Networks
    Huynh, Tuyen N.
    Mooney, Raymond J.
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, PT I, 2009, 5781 : 564 - 579
  • [35] Binding and Cross-Modal Learning in Markov Logic Networks
    Vrecko, Alen
    Skocaj, Danijel
    Leonardis, Ales
    ADAPTIVE AND NATURAL COMPUTING ALGORITHMS, PT II, 2011, 6594 : 235 - 244
  • [36] LEARNING COMPLEX EVENT MODELS USING MARKOV LOGIC NETWORKS
    Kardas, Karani
    Ulusoy, Ilkay
    Cicekli, Nihan Kesim
    ELECTRONIC PROCEEDINGS OF THE 2013 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO WORKSHOPS (ICMEW), 2013,
  • [37] Structure Learning for Markov Logic Networks with Many Descriptive Attributes
    Khosravi, Hassan
    Schulte, Oliver
    Man, Tong
    Xu, Xiaoyuan
    Bina, Bahareh
    PROCEEDINGS OF THE TWENTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-10), 2010, : 487 - 493
  • [38] Interpretable Explanations for Probabilistic Inference in Markov Logic
    Al Farabi, Khan Mohammad
    Sarkhel, Somdeb
    Dey, Sanorita
    Venugopal, Deepak
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 1256 - 1264
  • [39] Scaling-Up Inference in Markov Logic
    Venugopal, Deepak
    PROCEEDINGS OF THE TWENTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2015, : 4259 - 4260
  • [40] Quantified Markov Logic Networks
    Gutierrez-Basulto, Victor
    Jung, Jean Christoph
    Kuzelka, Ondrej
    SIXTEENTH INTERNATIONAL CONFERENCE ON PRINCIPLES OF KNOWLEDGE REPRESENTATION AND REASONING, 2018, : 602 - 611