MASSIVELY PARALLEL ARCHITECTURES FOR LARGE-SCALE NEURAL NETWORK SIMULATIONS

Cited by: 13
Authors:
FUJIMOTO, Y
FUKUDA, N
AKABANE, T
Institutions:
[1] SHARP CO LTD,INTEGRATED CIRCUITS GRP,CTR IC DEV,RES STAFF,TENRI,NARA 632,JAPAN
[2] SHARP CO LTD,CORP RES & DEV GRP,CTR INFORMAT SYST RES & DEV,TENRI,NARA 632,JAPAN
Source:
Keywords:
DOI:
10.1109/72.165590
CLC Number:
TP18 [Artificial Intelligence Theory];
Subject Classification Codes:
081104 ; 0812 ; 0835 ; 1405 ;
Abstract:
A toroidal lattice architecture (TLA) and a planar lattice architecture (PLA) are proposed as massively parallel neurocomputer architectures for large-scale neural network simulations. The performance of these architectures is almost proportional to the number of node processors, and they adopt the two-dimensional processor connections that are currently the most efficient to implement with wafer-scale integration (WSI) technology. They also address the connectivity problem, the performance degradation caused by the data-transmission bottleneck, and the load-balancing problem that arise in efficient parallel processing of large-scale neural network simulations. Furthermore, these architectures offer great expandability of parallelism and great flexibility across neural network configurations and neuron models. First, the general neuron model underlying these massively parallel architectures is defined. A multilayer perceptron (MLP) is then taken as a typical example of a neural network, and its simulation with the error back-propagation learning algorithm on virtual processors (VPs) with the TLA and the PLA is described. The mapping from the VPs to physical node processors with the same TLA and PLA is then presented. This mapping is performed by row and column partitions; at the same time, row and column permutations are carried out to balance the node-processor load. The mapping algorithm for load balancing is given, together with an equation for estimating the performance of these architectures. Finally, an implementation of the TLA with transputers is described, including the parallel processor configuration, the load-balancing algorithm, and an evaluation of its performance. A Hopfield neural network and an MLP have been implemented and applied to the traveling salesman problem (TSP) and to identity mapping (IM), respectively.
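The row/column partition with permutations described above can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's algorithm: it uses a cyclic (interleaved) permutation of rows and columns, one common way to spread consecutive rows and columns of the virtual-processor grid across a P x Q mesh of node processors so that no node is overloaded.

```python
# Hypothetical sketch of mapping an n_rows x n_cols grid of virtual
# processors (e.g. the weight matrix of an MLP layer) onto a P x Q mesh
# of physical node processors.  The cyclic assignment below is one simple
# row/column permutation for load balancing; the paper's own mapping
# algorithm may differ in detail.

def map_virtual_to_nodes(n_rows, n_cols, P, Q):
    """Assign each (row, col) virtual processor to a node (p, q).

    Cyclic interleaving spreads consecutive rows/columns, which tend to
    belong to the same layer, across different physical nodes.
    """
    assignment = {}
    for i in range(n_rows):
        for j in range(n_cols):
            assignment[(i, j)] = (i % P, j % Q)  # cyclic row/col permutation
    return assignment

def load_per_node(assignment, P, Q):
    """Count how many virtual processors land on each physical node."""
    load = {(p, q): 0 for p in range(P) for q in range(Q)}
    for node in assignment.values():
        load[node] += 1
    return load

# A 10 x 8 virtual-processor grid on a 4 x 4 node mesh: per-node loads
# differ only by the leftover rows, so the mapping is nearly balanced.
assignment = map_virtual_to_nodes(10, 8, 4, 4)
loads = load_per_node(assignment, 4, 4)
print(max(loads.values()) - min(loads.values()))  # -> 2
```

With a block (non-permuted) partition, all rows of one layer could land on the same node row; the interleaving is what keeps the per-node load nearly uniform as the network configuration varies.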
The TLA neurocomputer achieved 2 MCPS on a feedforward network and 600 KCUPS on a back-propagation network using 16 transputers. Measurements confirm that its performance increases almost in proportion to the number of node processors.
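MCPS and KCUPS are the conventional throughput metrics for neurocomputers: connections evaluated per second in the recall (feedforward) phase, and connection-weight updates per second in the learning phase. As a rough illustration only (the definitions are the standard ones, and the example numbers are chosen to match the reported 2 MCPS figure, not taken from the paper's measurements):

```python
# Standard neurocomputer throughput metrics (conventional definitions,
# not quoted from the paper):
#   CPS  = connections per second      (recall / feedforward phase)
#   CUPS = connection updates per sec  (learning / back-propagation phase)

def cps(n_connections, forward_time_s):
    """Connections evaluated per second during a forward pass."""
    return n_connections / forward_time_s

def cups(n_connections, train_step_time_s):
    """Connection weights updated per second during one learning step."""
    return n_connections / train_step_time_s

# A hypothetical network with 10,000 weights whose forward pass takes
# 5 ms runs at 2 MCPS -- the figure reported for the 16-transputer TLA.
print(cps(10_000, 0.005) / 1e6)   # -> 2.0 (MCPS)
print(cups(10_000, 0.005) / 1e3)  # -> 2000.0 (KCUPS)
```

Near-linear scaling in these metrics as node processors are added is exactly the proportionality claim the abstract makes for the TLA.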
Pages: 876-888 (13 pages)