NEURAL NETWORK IMPLEMENTATIONS AND SPEED-UP ON MASSIVELY PARALLEL MACHINES

Cited: 0
Authors
AZEMA-BARAC, ME [1]
REFENES, AN [1]
Affiliation
[1] UNIV LONDON UNIV COLL,DEPT COMP SCI,LONDON WC1E 6BT,ENGLAND
Source
MICROPROCESSING AND MICROPROGRAMMING | 1992 / Vol. 35 / Issues 1-5
Keywords
DOI
10.1016/0165-6074(92)90398-Q
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
This paper investigates large-scale learning algorithms and their implementation on massively parallel machines. The system prototype described here is part of an integrated environment for developing neural network applications, consisting of: i) a library of neural models and associated tools, and ii) a mapping system responsible for providing generic and efficient implementations on a spectrum of parallel machines, ranging from coarse-grain MIMD to fine-grain, massively parallel SIMD machines. We also describe the implementation of standard learning algorithms on the Distributed Array of Processors (DAP) and show that a speedup of 50 is obtained for a typical pattern recognition application.
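The abstract gives no implementation details, but as a rough illustration of what a data-parallel mapping of a standard learning algorithm can look like, the sketch below expresses one backpropagation step for a small multilayer perceptron entirely as whole-array operations. Each matrix product and element-wise update stands in for work that a DAP-style SIMD array would execute across its processing elements in lockstep; the network sizes, learning rate, and data are illustrative assumptions, not figures from the paper.

import numpy as np

# Minimal sketch of data-parallel backpropagation for a one-hidden-layer MLP.
# Every operation below is a whole-array computation; the shapes, learning
# rate, and toy task are illustrative assumptions, not taken from the paper.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy pattern-recognition task: 256-dimensional binary patterns, 10 classes.
n_in, n_hid, n_out = 256, 64, 10
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))

X = rng.integers(0, 2, size=(32, n_in)).astype(float)   # batch of patterns
T = np.eye(n_out)[rng.integers(0, n_out, size=32)]      # one-hot targets
lr = 0.1

for epoch in range(100):
    # Forward pass: one array operation per layer.
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)

    # Backward pass: element-wise error deltas, again pure array operations.
    dY = (Y - T) * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)

    # Weight updates accumulated over the whole batch.
    W2 -= lr * H.T @ dY
    W1 -= lr * X.T @ dH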
Pages: 747-754
Number of pages: 8