Learning efficient logic programs

Cited: 0
Authors
Andrew Cropper
Stephen H. Muggleton
Affiliations
[1] University of Oxford,Department of Computer Science
[2] Imperial College London,Department of Computing
Source
Machine Learning | 2019 / Vol. 108
Keywords
Minimal Cost Programs; Robot Strategy; Hypothesis Space; Metagol; Lower Resource Complexities;
DOI
Not available
Abstract
When machine learning programs from data, we ideally want to learn efficient rather than inefficient programs. However, existing inductive logic programming (ILP) techniques cannot distinguish between the efficiencies of programs, such as permutation sort O(n!) and merge sort O(n log n). To address this limitation, we introduce Metaopt, an ILP system which iteratively learns lower cost logic programs, each time further restricting the hypothesis space. We prove that given sufficiently large numbers of examples, Metaopt converges on minimal cost programs, and our experiments show that in practice only small numbers of examples are needed. To learn minimal time-complexity programs, including non-deterministic programs, we introduce a cost function called tree cost which measures the size of the SLD-tree searched when a program is given a goal. Our experiments on programming puzzles, robot strategies, and real-world string transformation problems show that Metaopt learns minimal cost programs. To our knowledge, Metaopt is the first machine learning approach that, given sufficient numbers of training examples, is guaranteed to learn minimal cost logic programs, including minimal time-complexity programs.
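The iterative strategy the abstract describes, find a consistent program, then restrict the search to programs of strictly lower cost, can be sketched abstractly. The following Python fragment is a hypothetical illustration of that loop only; the enumerator, the `consistent` predicate, and the numeric `cost` stand-in are all assumptions for illustration, not Metaopt's actual Prolog machinery or its tree-cost measure.

```python
def learn_minimal_cost(candidates, consistent, cost):
    """Return the lowest-cost candidate consistent with the examples.

    candidates : iterable of candidate programs (the hypothesis space)
    consistent : predicate testing a program against the training examples
    cost       : function assigning a numeric cost to a program
    """
    best = None
    bound = float("inf")
    for prog in candidates:
        # Restrict the search: skip programs not cheaper than the best so far.
        if cost(prog) >= bound:
            continue
        if consistent(prog):
            best, bound = prog, cost(prog)
    return best

# Toy usage: "programs" are sorting strategies, and cost is a hand-assigned
# score standing in for measured tree cost.
strategies = [("permutation_sort", 100), ("bubble_sort", 20), ("merge_sort", 5)]
winner = learn_minimal_cost(
    strategies,
    consistent=lambda p: True,  # assume all candidates sort correctly
    cost=lambda p: p[1],
)
# winner is ("merge_sort", 5), the minimal-cost candidate
```

With enough examples the `consistent` filter rules out cheap-but-wrong programs, which is the intuition behind the convergence guarantee stated in the abstract.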
Pages: 1063–1083
Page count: 20
Related Papers
50 records
  • [21] Precise and efficient groundness analysis for logic programs
    Marriott, Kim
    Sondergaard, Harald
    ACM Letters on Programming Languages and Systems, 1993, 2 (1-4): 181 - 196
  • [22] Yet more efficient EM learning for parameterized logic programs by inter-goal sharing
    Kameya, Y
    Sato, T
    Zhou, NF
    ECAI 2004: 16TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2004, 110 : 490 - 494
  • [23] Mind change complexity of learning logic programs
    Jain, S
    Sharma, A
    THEORETICAL COMPUTER SCIENCE, 2002, 284 (01) : 143 - 160
  • [24] Learning higher-order logic programs
    Andrew Cropper
    Rolf Morel
    Stephen Muggleton
    Machine Learning, 2020, 109 : 1289 - 1322
  • [25] Constraint-Driven Learning of Logic Programs
    Morel, Rolf
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 15726 - 15727
  • [26] Strategies in Combined Learning via Logic Programs
    Evelina Lamma
    Fabrizio Riguzzi
    Luís Moniz Pereira
    Machine Learning, 2000, 38 : 63 - 87
  • [27] Lifted discriminative learning of probabilistic logic programs
    Arnaud Nguembang Fadja
    Fabrizio Riguzzi
    Machine Learning, 2019, 108 : 1111 - 1135
  • [28] Basic principles of learning bayesian logic programs
    Kersting, Kristian
    De Raedt, Luc
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2008, 4911 LNAI : 189 - 221
  • [29] Learning logic programs with random classification noise
    Horváth, T
    Sloan, RH
    Turán, G
    INDUCTIVE LOGIC PROGRAMMING, 1997, 1314 : 315 - 336
  • [30] Learning structure and parameters of Stochastic Logic Programs
    Muggleton, S
    INDUCTIVE LOGIC PROGRAMMING, 2003, 2583 : 198 - 206