Scalable learning and inference in Markov logic networks

Cited by: 6
Authors
Sun, Zhengya [1 ]
Zhao, Yangyang [1 ]
Wei, Zhuoyu [1 ]
Zhang, Wensheng [1 ]
Wang, Jue [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Beijing 100864, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Markov logic networks; Structure learning; Probabilistic inference; Large scale machine learning;
DOI
10.1016/j.ijar.2016.12.003
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Markov logic networks (MLNs) have emerged as a powerful representation that combines first-order logic with probabilistic graphical models, and they have shown very good results in many problem domains. However, current implementations of MLNs do not scale well, owing to the large search space and the intractable number of clause groundings, which prevents their widespread adoption. In this paper, we propose a general framework named Ground Network Sampling (GNS) for scaling up MLN learning and inference. GNS offers a new instantiation perspective by encoding ground substitutions as simple paths over the Herbrand universe, using the interactions among objects to constrain the search space. To make this search tractable for large-scale problems, GNS integrates random walks with subgraph pattern mining, gradually building up a representative subset of simple paths. For inference, a template network is introduced to quickly locate promising paths that can ground given logical statements. The resulting sampled paths are then transformed into ground clauses, which can be used for clause creation and probabilistic inference. Experiments on several real-world datasets demonstrate that our approach offers better scalability while maintaining comparable or better predictive performance than state-of-the-art MLN techniques. (C) 2016 Elsevier Inc. All rights reserved.
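The core instantiation idea in the abstract — sampling simple (repetition-free) paths over a graph of domain objects via random walks, rather than enumerating all clause groundings — can be illustrated with a minimal sketch. This is not the paper's actual GNS procedure (which also uses subgraph pattern mining and a template network); the graph, function, and parameter names below are illustrative assumptions.

```python
import random

def sample_simple_paths(graph, num_walks, max_len, seed=0):
    """Sample simple paths (no repeated nodes) by bounded random walks.

    `graph` maps each object (node) to a list of neighbors, i.e. objects
    it co-occurs with in some ground atom. Illustrative sketch only, not
    the GNS algorithm from the paper.
    """
    rng = random.Random(seed)
    nodes = list(graph)
    paths = set()
    for _ in range(num_walks):
        node = rng.choice(nodes)          # random start object
        path, visited = [node], {node}
        while len(path) < max_len:
            # only step to unvisited neighbors, keeping the path simple
            candidates = [n for n in graph[node] if n not in visited]
            if not candidates:
                break
            node = rng.choice(candidates)
            path.append(node)
            visited.add(node)
        if len(path) > 1:                 # a path needs at least one edge
            paths.add(tuple(path))
    return paths

# Toy graph over four constants linked by shared ground atoms.
g = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
paths = sample_simple_paths(g, num_walks=50, max_len=3)
```

Each sampled path corresponds to a chain of interacting objects and can then be matched against clause templates to produce ground clauses, which is the role the template network plays in the full framework.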
Pages: 39-55
Page count: 17
Related papers
50 records in total
  • [41] On Projectivity in Markov Logic Networks
    Malhotra, Sagar
    Serafini, Luciano
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT V, 2023, 13717 : 223 - 238
  • [42] Encoding Markov Logic Networks in Possibilistic Logic
    Kuzelka, Ondrej
    Davis, Jesse
    Schockaert, Steven
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2015, : 454 - 463
  • [43] Few-shot activity learning by dual Markov logic networks
    Zhang, Zhimin
    Zhu, Tao
    Gao, Dazhi
    Xu, Jiabo
    Liu, Hong
    Ning, Huansheng
    KNOWLEDGE-BASED SYSTEMS, 2022, 240
  • [44] Learning Markov logic networks with limited number of labeled training examples
    Wong, Tak-Lam
    INTERNATIONAL JOURNAL OF KNOWLEDGE-BASED AND INTELLIGENT ENGINEERING SYSTEMS, 2014, 18 (02) : 91 - 98
  • [45] Augmenting Deep Learning with Relational Knowledge from Markov Logic Networks
    Islam, Mohammad Maminur
    Sarkhel, Somdeb
    Venugopal, Deepak
    2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020, : 54 - 63
  • [46] Structure Learning of Markov Logic Networks through Iterated Local Search
    Biba, Marenglen
    Ferilli, Stefano
    Esposito, Floriana
    ECAI 2008, PROCEEDINGS, 2008, 178 : 361 - +
  • [47] Modeling binding and cross-modal learning in Markov logic networks
    Vrecko, Alen
    Leonardis, Ales
    Skocaj, Danijel
    NEUROCOMPUTING, 2012, 96 : 29 - 36
  • [48] A posterior-based method for Markov logic networks parameters learning
    Sun, Shuyang
    Chen, Jianzhong
    Liu, Dayou
    Sun, Chengmin
    PROCEEDINGS OF THE FIFTH IEEE INTERNATIONAL CONFERENCE ON COGNITIVE INFORMATICS, VOLS 1 AND 2, 2006, : 529 - 534
  • [49] RegRocket: Scalable Multinomial Autologistic Regression with Unordered Categorical Variables Using Markov Logic Networks
    Sabek, Ibrahim
    Musleh, Mashaal
    Mokbel, Mohamed F.
    ACM TRANSACTIONS ON SPATIAL ALGORITHMS AND SYSTEMS, 2019, 5 (04)
  • [50] Scalable Bayesian Inference for Coupled Hidden Markov and Semi-Markov Models
    Touloupou, Panayiota
    Finkenstadt, Barbel
    Spencer, Simon E. F.
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2020, 29 (02) : 238 - 249