Parallel ILP for distributed-memory architectures

Cited by: 0
Authors
Nuno A. Fonseca
Ashwin Srinivasan
Fernando Silva
Rui Camacho
Affiliations
[1] Universidade do Porto, Instituto de Biologia Molecular e Celular (IBMC) & CRACS
[2] Indian Institute of Technology, IBM India Research Laboratory, Block 1
[3] University of New South Wales, Department of CSE & Centre for Health Informatics
[4] Universidade do Porto, CRACS & Faculdade de Ciências
[5] Universidade do Porto, LIAAD & Faculdade de Engenharia
Source
Machine Learning | 2009 / Vol. 74
Keywords
ILP; Parallelism; Efficiency;
DOI
Not available
Abstract
The growth of machine-generated relational databases, both in the sciences and in industry, is rapidly outpacing our ability to extract useful information from them by manual means. This has brought into focus machine learning techniques like Inductive Logic Programming (ILP) that are able to extract human-comprehensible models for complex relational data. The price to pay is that ILP techniques are not efficient: they can be seen as performing a form of discrete optimisation, which is known to be computationally hard; and the complexity is usually some super-linear function of the number of examples. While little can be done to alter the theoretical bounds on the worst-case complexity of ILP systems, some practical gains may follow from the use of multiple processors. In this paper we survey the state-of-the-art on parallel ILP. We implement several parallel algorithms and study their performance using some standard benchmarks. The principal findings of interest are these: (1) of the techniques investigated, one that simply constructs models in parallel on each processor using a subset of data and then combines the models into a single one, yields the best results; and (2) sequential (approximate) ILP algorithms based on randomized searches have lower execution times than (exact) parallel algorithms, without sacrificing the quality of the solutions found.
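The best-performing technique described in the abstract, building a model on each processor from a subset of the data and then combining the per-partition models, can be illustrated with a minimal sketch. This is not the authors' implementation: the toy "learner" below induces one interval rule per class rather than searching over logic clauses, and it uses a thread pool for brevity where a distributed-memory system would use one process per machine (e.g. via MPI). All names here (`learn_rules`, `combine`, `parallel_learn`) are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def learn_rules(examples):
    """Toy per-partition 'learner': one interval rule per class label.

    Stands in for an ILP search over clauses, run independently on
    one processor's subset of the examples.
    """
    rules = {}
    for x, label in examples:
        lo, hi = rules.get(label, (x, x))
        rules[label] = (min(lo, x), max(hi, x))
    return rules

def combine(models):
    """Merge the per-partition models into a single model.

    Here rules for the same class are widened to cover both partitions;
    a real combiner would merge or filter the learned clauses.
    """
    merged = {}
    for model in models:
        for label, (lo, hi) in model.items():
            mlo, mhi = merged.get(label, (lo, hi))
            merged[label] = (min(mlo, lo), max(mhi, hi))
    return merged

def parallel_learn(examples, n_workers=4):
    # Partition the examples round-robin across workers, learn a model
    # on each partition in parallel, then combine into one model.
    parts = [examples[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        models = list(pool.map(learn_rules, parts))
    return combine(models)
```

For example, `parallel_learn([(1, "a"), (2, "a"), (10, "b"), (12, "b"), (3, "a"), (11, "b")], n_workers=2)` yields `{"a": (1, 3), "b": (10, 12)}`: each worker learns rules from its half of the data, and the combiner merges them.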
Pages: 257–279
Page count: 22