MASSIVELY PARALLEL ARCHITECTURES FOR LARGE-SCALE NEURAL NETWORK SIMULATIONS

Cited by: 13
Authors:
FUJIMOTO, Y
FUKUDA, N
AKABANE, T
Affiliations:
[1] Sharp Corp., Ltd., Integrated Circuits Group, IC Development Center, Research Staff, Tenri, Nara 632, Japan
[2] Sharp Corp., Ltd., Corporate Research & Development Group, Center for Information Systems Research & Development, Tenri, Nara 632, Japan
Source: IEEE Transactions on Neural Networks
DOI: 10.1109/72.165590
CLC classification: TP18 [Artificial intelligence theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract:
A toroidal lattice architecture (TLA) and a planar lattice architecture (PLA) are proposed as massively parallel neurocomputer architectures for large-scale neural network simulations. The performance of these architectures is almost proportional to the number of node processors, and they adopt two-dimensional processor connections, the most efficient topology yet implemented with wafer-scale integration (WSI) technology. They also address the connectivity problem, the performance degradation caused by the data-transmission bottleneck, and the load-balancing problem that must be solved for efficient parallel processing of large-scale neural network simulations. Furthermore, these architectures offer great expandability of parallelism and flexibility for various neural network configurations and neuron models. First, the general neuron model underlying these massively parallel architectures is defined. A multilayer perceptron (MLP) is then taken as a typical example, and its simulation with the error back-propagation learning algorithm on virtual processors (VPs) with the TLA and the PLA is described. The mapping from VPs to physical node processors, using the same TLA and PLA, is then presented. This mapping is performed by row and column partitions; at the same time, row and column permutations are carried out to balance the node-processor load, and the mapping algorithm for this load balancing is given. An equation for estimating the performance of these architectures is also presented. Finally, an implementation of the TLA with transputers is described, including the parallel processor configuration, the load-balancing algorithm, and an evaluation of its performance. A Hopfield neural network and an MLP have been implemented and applied to the traveling salesman problem (TSP) and to identity mapping (IM), respectively. The TLA neurocomputer achieved 2 MCPS in a feedforward network and 600 KCUPS in a back-propagation network using 16 transputers, demonstrating that its performance increases almost in proportion to the number of node processors.
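To make the mapping concrete, here is a minimal sketch (not the paper's algorithm; mlp_mask and load_per_node are hypothetical helpers, and a cyclic round-robin assignment stands in for the paper's row and column permutations). It distributes the nonzero weights of a small fully connected MLP over a P x Q lattice of node processors and compares a contiguous partition against a permuted one to show why permutation evens out the per-node load:

import numpy as np

def mlp_mask(layers):
    # n-by-n boolean connectivity mask for a fully connected MLP:
    # mask[i, j] is True when neuron j feeds neuron i.
    n = sum(layers)
    mask = np.zeros((n, n), dtype=bool)
    off = 0
    for a, b in zip(layers, layers[1:]):
        mask[off + a:off + a + b, off:off + a] = True  # size-a layer -> size-b layer
        off += a
    return mask

def load_per_node(mask, P, Q, cyclic=True):
    # Count the weights assigned to each node processor on a P x Q lattice.
    # cyclic=True deals matrix rows and columns round-robin (a balancing
    # permutation); cyclic=False assigns contiguous blocks, for comparison.
    n = mask.shape[0]
    load = np.zeros((P, Q), dtype=int)
    for i, j in zip(*np.nonzero(mask)):
        p = i % P if cyclic else i * P // n
        q = j % Q if cyclic else j * Q // n
        load[p, q] += 1
    return load

mask = mlp_mask([8, 16, 4])                     # small 8-16-4 MLP, 192 weights
print(load_per_node(mask, 4, 4, cyclic=False))  # contiguous blocks: badly skewed
print(load_per_node(mask, 4, 4, cyclic=True))   # permuted: exactly 12 weights per node

The reported figures also imply simple per-node rates: 2 MCPS over 16 transputers is about 125 K connections per second per node, and 600 KCUPS is about 37.5 K connection updates per second per node; near-linear scaling means these per-node rates stay roughly constant as processors are added.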
Pages: 876-888 (13 pages)