ZNN - A Fast and Scalable Algorithm for Training 3D Convolutional Networks on Multi-Core and Many-Core Shared Memory Machines

Cited by: 20
Authors
Zlateski, Aleksandar [1 ]
Lee, Kisuk [2 ]
Seung, H. Sebastian [3 ,4 ]
Affiliations
[1] MIT, Elect Engn & Comp Sci Dept, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[2] MIT, Brain & Cognit Sci Dept, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[3] Princeton Univ, Princeton Neurosci Inst, Princeton, NJ 08540 USA
[4] Princeton Univ, Dept Comp Sci, Princeton, NJ 08540 USA
Keywords
Neural networks
DOI
10.1109/IPDPS.2016.119
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Convolutional networks (ConvNets) have become a popular approach to computer vision. It is important to accelerate ConvNet training, which is computationally costly. We propose a novel parallel algorithm based on decomposition into a set of tasks, most of which are convolutions or FFTs. Applying Brent's theorem to the task dependency graph implies that linear speedup with the number of processors is attainable within the PRAM model of parallel computation, for wide network architectures. To attain such performance on real shared-memory machines, our algorithm computes convolutions converging on the same node of the network with temporal locality to reduce cache misses, and sums the convergent convolution outputs via an almost wait-free concurrent method to reduce time spent in critical sections. We implement the algorithm with a publicly available software package called ZNN. Benchmarking with multi-core CPUs shows that ZNN can attain speedup roughly equal to the number of physical cores. We also show that ZNN can attain over 90x speedup on a many-core CPU (Xeon Phi, Knights Corner). These speedups are achieved for network architectures with widths that are in common use. The task parallelism of the ZNN algorithm is suited to CPUs, while the SIMD parallelism of previous algorithms is compatible with GPUs. Through examples, we show that ZNN can be either faster or slower than certain GPU implementations depending on specifics of the network architecture, kernel sizes, and density and size of the output patch. ZNN may be less costly to develop and maintain, due to the relative ease of general-purpose CPU programming.
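The abstract's appeal to Brent's theorem works as follows: with total work T_1 and critical-path length T_inf, a greedy schedule on p processors finishes in time T_p <= T_1/p + T_inf, so speedup is near-linear whenever T_1 >> p * T_inf, which holds for wide layers whose many convolutions are independent tasks. The record does not reproduce the paper's accumulation code, so the following is only a minimal C++ sketch of one way to sum convolutions converging on the same node in an almost wait-free manner; the type NodeAccumulator, the deliver function, and the slot-per-edge layout are illustrative assumptions, not ZNN's actual interface.

// Illustrative sketch, not the ZNN source. Each producer task writes its
// convolution output into a private slot and atomically decrements a
// pending counter (wait-free for the producer); only the last producer
// performs the reduction, so no critical section guards the sum itself.
#include <atomic>
#include <cstddef>
#include <vector>

struct NodeAccumulator {
    std::size_t voxels;                        // size of one feature map
    std::vector<std::vector<float>> slots;     // one private buffer per incoming edge
    std::vector<float> sum;                    // accumulated feature map
    std::atomic<int> pending;                  // contributions not yet delivered

    NodeAccumulator(std::size_t n, int inputs)
        : voxels(n),
          slots(inputs, std::vector<float>(n)),
          sum(n, 0.0f),
          pending(inputs) {}

    // Called by the task that produced the convolution for edge `slot`.
    // Returns true for exactly one caller -- the one whose contribution
    // completed the node -- which may then schedule dependent tasks.
    bool deliver(int slot, const std::vector<float>& conv_out) {
        slots[slot] = conv_out;                // private write, no contention
        if (pending.fetch_sub(1, std::memory_order_acq_rel) == 1) {
            for (const auto& s : slots)        // last contributor reduces
                for (std::size_t i = 0; i < voxels; ++i)
                    sum[i] += s[i];
            return true;
        }
        return false;
    }
};

In such a scheme, a task scheduler would call deliver from each convolution task and, on a true return, enqueue the node's transfer function and downstream tasks.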
Pages: 801 - 811 (11 pages)