Evolutionary optimization of neural networks with heterogeneous computation: study and implementation

Cited: 4
Authors
Fe, Jorge D. [1 ]
Aliaga, Ramon J. [1 ]
Gadea-Girones, Rafael [1 ]
Affiliations
[1] Univ Politecn Valencia, Inst Mol Imaging Technol, Valencia 46022, Spain
Source
JOURNAL OF SUPERCOMPUTING | 2015, Vol. 71, Issue 8
Keywords
Evolutionary computation; Embedded system; FPGA; Neural networks; GENETIC ALGORITHM; PARAMETERS;
DOI
10.1007/s11227-015-1419-7
Chinese Library Classification (CLC)
TP3 [Computing technology and computer technology];
Discipline code
0812;
Abstract
In the optimization of artificial neural networks (ANNs) via evolutionary algorithms, and in the implementation of the training required by the objective function, there is often a trade-off between efficiency and flexibility. Pure software solutions on general-purpose processors tend to be slow because they do not exploit the inherent parallelism, whereas hardware realizations usually rely on optimizations that restrict the range of applicable network topologies, or attempt to increase processing efficiency through low-precision data representation. This paper presents, first, a study showing the need for a heterogeneous platform (CPU-GPU-FPGA) to accelerate the optimization of ANNs using genetic algorithms and, second, an implementation of a platform based on embedded systems with hardware accelerators implemented in a Field Programmable Gate Array (FPGA). Implementing the individuals on a remote low-cost Altera FPGA allowed us to obtain a 3x-4x acceleration compared with a 2.83 GHz Intel Xeon Quad-Core and 6x-7x compared with a 2.2 GHz AMD Opteron Quad-Core 2354.
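A minimal sketch (not the authors' implementation) of the workflow the abstract describes: a genetic algorithm searches over ANN topologies on the host CPU, while the expensive fitness evaluation (training each candidate network) would be dispatched to a remote accelerator. The topology encoding, the function evaluate_on_accelerator, and the toy fitness used here are illustrative assumptions only.

    # Sketch of a host-side GA loop whose fitness step would be offloaded
    # to a remote FPGA/GPU accelerator; here a toy proxy stands in for the
    # actual ANN training cost (assumption for illustration only).
    import random

    def random_topology(max_layers=3, max_neurons=16):
        """Encode an individual as a list of hidden-layer sizes."""
        return [random.randint(1, max_neurons)
                for _ in range(random.randint(1, max_layers))]

    def evaluate_on_accelerator(topology):
        """Placeholder fitness: in the paper this step is the costly ANN
        training, dispatched to the remote accelerator; this toy proxy
        simply prefers smaller networks."""
        return 1.0 / (1.0 + sum(topology))

    def mutate(topology, max_neurons=16):
        """Resample one hidden-layer size of a copied parent."""
        child = topology[:]
        i = random.randrange(len(child))
        child[i] = random.randint(1, max_neurons)
        return child

    def genetic_search(pop_size=20, generations=30):
        population = [random_topology() for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population, key=evaluate_on_accelerator, reverse=True)
            parents = scored[: pop_size // 2]            # truncation selection
            children = [mutate(random.choice(parents)) for _ in parents]
            population = parents + children              # elitism + offspring
        return max(population, key=evaluate_on_accelerator)

    if __name__ == "__main__":
        best = genetic_search()
        print("best topology (hidden layer sizes):", best)

In the paper's setting, the body of evaluate_on_accelerator would be replaced by the round trip to the FPGA-hosted training core; the GA loop itself is unchanged regardless of where fitness is computed.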
Pages: 2944-2962
Page count: 19
Related papers
50 records in total
  • [1] Evolutionary optimization of neural networks with heterogeneous computation: study and implementation
    Jorge D. Fe
    Ramón J. Aliaga
    Rafael Gadea-Gironés
    [J]. The Journal of Supercomputing, 2015, 71 : 2944 - 2962
  • [2] Boosted Neural Networks in Evolutionary Computation
    Holena, Martin
    Linke, David
    Steinfeldt, Norbert
    [J]. NEURAL INFORMATION PROCESSING, PT 2, PROCEEDINGS, 2009, 5864 : 131 - +
  • [3] Operator adaptation in evolutionary computation and its application to structure optimization of neural networks
    Igel, C
    Kreutz, M
    [J]. NEUROCOMPUTING, 2003, 55 (1-2) : 347 - 361
  • [4] Development of evolutionary computation and evolutionary neural networks with massively parallel computing
    Song, Aiguo
    Lu, Jiren
    [J]. Xitong Fangzhen Xuebao/Acta Simulata Systematica Sinica, 1998, 10 (01): : 14 - 19
  • [5] Adaptable multiple neural networks using evolutionary computation
    Sohn, SW
    Dagli, CH
    [J]. APPLICATIONS AND SCIENCE OF COMPUTATIONAL INTELLIGENCE V, 2002, 4739 : 141 - 149
  • [6] Structure optimization of neural networks for evolutionary design optimization
    Hüsken, M
    Jin, Y
    Sendhoff, B
    [J]. SOFT COMPUTING, 2005, 9 (01) : 21 - 28
  • [7] Structure optimization of neural networks for evolutionary design optimization
    M. Hüsken
    Y. Jin
    B. Sendhoff
    [J]. Soft Computing, 2005, 9 : 21 - 28
  • [8] Optimization with neural networks trained by evolutionary algorithms
    Velazco, MI
    Lyra, C
    [J]. PROCEEDINGS OF THE 2002 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-3, 2002, : 1516 - 1521
  • [9] Evolutionary optimization of neural networks for fire recognition
    Kandil, Magy
    Shahin, Samir
    Atiya, Amir
    Fayek, Magda
    [J]. 2006 International Conference on Computer Engineering & Systems, 2006, : 431 - 435
  • [10] Evolutionary Computation Paradigm to Determine Deep Neural Networks Architectures
    Ivanescu, R. C.
    Belciug, S.
    Nascu, A.
    Serbanescu, M. S.
    Iliescu, D. G.
    [J]. INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL, 2022, 17 (05)