Scalable learning and inference in Markov logic networks

Cited by: 6
Authors
Sun, Zhengya [1 ]
Zhao, Yangyang [1 ]
Wei, Zhuoyu [1 ]
Zhang, Wensheng [1 ]
Wang, Jue [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Beijing 100864, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Markov logic networks; Structure learning; Probabilistic inference; Large scale machine learning;
DOI
10.1016/j.ijar.2016.12.003
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Markov logic networks (MLNs) have emerged as a powerful representation that combines first-order logic with probabilistic graphical models, and they have shown very good results in many problem domains. However, current MLN implementations do not scale well, owing to the large search space and the intractable number of clause groundings, which prevents their widespread adoption. In this paper, we propose a general framework named Ground Network Sampling (GNS) for scaling up MLN learning and inference. GNS offers a new instantiation perspective by encoding ground substitutions as simple paths in the Herbrand universe, using the interactions among objects to constrain the search space. To keep this search tractable for large-scale problems, GNS integrates random walks with subgraph pattern mining, gradually building up a representative subset of simple paths. For inference, a template network is introduced to quickly locate promising paths that can ground given logical statements. The sampled paths are then transformed into ground clauses, which can be used for clause creation and probabilistic inference. Experiments on several real-world datasets demonstrate that our approach offers better scalability while maintaining comparable or better predictive performance than state-of-the-art MLN techniques. © 2016 Elsevier Inc. All rights reserved.
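The abstract's core mechanism, sampling simple paths by random walks over a graph whose nodes are constants and whose edges are true ground atoms, can be sketched briefly. The Python snippet below is only an illustrative toy under stated assumptions: the knowledge base, predicates (AdvisedBy, Coauthor, WorksAt), and constants are invented for illustration, and the sketch is not the authors' GNS implementation; it omits the subgraph pattern mining and the template network described in the paper.

```python
# Illustrative sketch (not the paper's GNS code): sample simple paths between
# constants by random walks over a "ground network" whose edges are true ground
# atoms, then read each path off as a conjunction of ground literals.
import random
from collections import defaultdict

# Hypothetical toy knowledge base: binary ground atoms over a small domain.
facts = [
    ("AdvisedBy", "alice", "bob"),
    ("Coauthor",  "bob",   "carol"),
    ("Coauthor",  "alice", "carol"),
    ("WorksAt",   "carol", "uni1"),
]

# Build an undirected graph over constants; each edge remembers the atom it came from.
graph = defaultdict(list)
for pred, a, b in facts:
    graph[a].append((b, (pred, a, b)))
    graph[b].append((a, (pred, a, b)))

def sample_simple_path(start, max_len, rng):
    """Random walk from `start` that never revisits a constant (a simple path)."""
    visited = {start}
    path_atoms = []
    current = start
    for _ in range(max_len):
        candidates = [(nbr, atom) for nbr, atom in graph[current] if nbr not in visited]
        if not candidates:
            break
        nbr, atom = rng.choice(candidates)
        path_atoms.append(atom)
        visited.add(nbr)
        current = nbr
    return path_atoms

rng = random.Random(0)
# Each sampled path is a candidate grounding: its atoms can seed a clause.
for _ in range(3):
    path = sample_simple_path("alice", max_len=3, rng=rng)
    clause = " ^ ".join(f"{p}({x},{y})" for p, x, y in path)
    print(clause or "(empty path)")
```

Each sampled path yields a conjunction of ground literals that could seed a clause; in the paper the sampling is guided by pattern mining rather than being uniform as in this toy.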
Pages: 39-55
Number of pages: 17
Related papers
50 records in total
  • [1] Bayesian Markov Logic Networks Bayesian Inference for Statistical Relational Learning
    Nedbal, Radim
    Serafini, Luciano
    AI*IA 2018 - ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, 11298 : 348 - 361
  • [2] Quantum Enhanced Inference in Markov Logic Networks
    Wittek, Peter
    Gogolin, Christian
    SCIENTIFIC REPORTS, 2017, 7
  • [3] Lifted MAP Inference for Markov Logic Networks
    Sarkhel, Somdeb
    Venugopal, Deepak
    Singla, Parag
    Gogate, Vibhav
    ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 33, 2014, 33 : 859 - 867
  • [4] Approximate Online Inference for Dynamic Markov Logic Networks
    Geier, Thomas
    Biundo, Susanne
    2011 23RD IEEE INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2011), 2011, : 764 - 768
  • [5] Reinforcement Learning with Markov Logic Networks
    Wang, Weiwei
    Gao, Yang
    Chen, Xingguo
    Ge, Shen
    MICAI 2008: ADVANCES IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2008, 5317 : 230 - 242
  • [6] Boosting learning and inference in Markov logic through metaheuristics
    Biba, Marenglen
    Ferilli, Stefano
    Esposito, Floriana
    APPLIED INTELLIGENCE, 2011, 34 (02) : 279 - 298
  • [7] Scalable Training of Markov Logic Networks Using Approximate Counting
    Sarkhel, Somdeb
    Venugopal, Deepak
    Tuan Anh Pham
    Singla, Parag
    Gogate, Vibhav
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 1067 - 1073
  • [8] Numerical Markov Logic Network: A Scalable Probabilistic Framework for Hybrid Knowledge Inference
    Zhong, Ping
    Li, Zhanhuai
    Chen, Qun
    Hou, Boyi
    Ahmed, Murtadha
    INFORMATION, 2021, 12 (03)