Training Many Neural Networks in Parallel via Back-Propagation

Cited: 5
Authors
Cruz-Lopez, Javier A. [1 ]
Boyer, Vincent [1 ]
El-Baz, Didier [2 ]
Affiliations
[1] Univ Autonoma Nuevo Leon, Grad Program Syst Engn, Monterrey 66451, Mexico
[2] Univ Toulouse, LAAS CNRS, Toulouse, France
Keywords
Product Demand Forecasting; Neural Networks; Back-Propagation; GPU; Multiprocessing; Implementation
DOI
10.1109/IPDPSW.2017.72
CLC Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
This paper presents two parallel implementations of the back-propagation algorithm, a widely used approach for training Artificial Neural Networks (ANNs). These implementations increase the number of ANNs that can be trained simultaneously by exploiting the massive thread-level parallelism of GPUs and the multi-core architecture of modern CPUs, respectively. Computational experiments are carried out on time series of product demand from a Mexican brewery, where the goal is to optimize product delivery; time series from the M3-competition benchmark are also considered. The results show the benefit of training several ANNs in parallel compared with other forecasting methods used in the competition: it yields a better fit of the network weights and allows many ANNs for different time series to be trained in a short time.
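The CPU-side idea described in the abstract — running independent back-propagation trainings in parallel, one process per ANN — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the one-hidden-layer network, its size, the learning rate, and the toy sine-wave "demand" series are all assumptions made here for the example.

```python
# Sketch of multi-core training of several independent ANNs (one per worker),
# each by plain batch back-propagation. Hyperparameters are illustrative.
import numpy as np
from multiprocessing import Pool


def train_ann(args):
    """Train one small one-hidden-layer ANN by back-propagation; return (seed, MSE)."""
    seed, X, y = args
    rng = np.random.default_rng(seed)
    n_in, n_hid = X.shape[1], 8
    W1 = rng.normal(0.0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(0.0, 0.5, (n_hid, 1));    b2 = np.zeros(1)
    lr = 0.05
    for _ in range(2000):
        # forward pass
        h = np.tanh(X @ W1 + b1)
        out = h @ W2 + b2
        err = out - y                          # gradient of 0.5 * squared error
        # backward pass: propagate the error through the layers
        dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)     # tanh derivative
        dW1 = X.T @ dh / len(X);  db1 = dh.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return seed, float((err ** 2).mean())


def make_series(n=64, lag=4):
    """Toy sliding-window dataset from a sine series: lag past values -> next value."""
    s = np.sin(np.linspace(0.0, 6.0 * np.pi, n + lag))
    X = np.stack([s[i:i + lag] for i in range(n)])
    y = s[lag:lag + n].reshape(-1, 1)
    return X, y


if __name__ == "__main__":
    X, y = make_series()
    # Train 4 ANNs in parallel, each from a different random initialization,
    # then keep the one with the best fit (as the paper does for weight fitting).
    with Pool(4) as pool:
        results = pool.map(train_ann, [(seed, X, y) for seed in range(4)])
    best_seed, best_mse = min(results, key=lambda r: r[1])
    print(best_seed, round(best_mse, 4))
```

For many different time series (as in the brewery and M3 experiments), the job list would simply pair each series with one or more seeds, so the pool trains one ANN per (series, initialization) pair.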
Pages: 501-509 (9 pages)