A parallel computing and mathematical method optimization of CNN network convolution

Cited by: 3
Authors
Su, Yue [1 ]
Affiliations
[1] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Liaoning, Peoples R China
Keywords
Convolutional Neural Network; Deep Learning; Parallelization; Communication; Overlapping; NEURAL-NETWORK
DOI
10.1016/j.micpro.2020.103571
CLC Classification Number
TP3 [Computing technology, computer technology]
Subject Classification Code
0812
Abstract
Training a convolutional neural network (CNN) is a compute-intensive task, so parallelizing the training has become essential. Two obstacles stand in the way of a scalable parallel CNN in a distributed-memory computing environment. One is the high data dependency between adjacent model layers, each of which carries only a small volume of parameters. The other is maximizing the overlap between inter-process communication and parallel computation, since large amounts of data must travel over the communication channel before they can be used in the computation. This overlap is achieved by using threads on each compute node to initiate communication as soon as gradients become available. Because backpropagation produces its output data layer by layer, the communication of one layer's data can proceed in parallel with the computation of the other layers. To study the impact of this overlap on efficiency and scalability, the model structure and various mathematical optimization methods are evaluated. When training the VGGnet model on an image dataset with mini-batch sizes of 256 and 512, respectively, speedup is achieved across the compute nodes.
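The layer-wise overlap described in the abstract can be sketched, in highly simplified form, with a backward-pass loop that hands each gradient to a dedicated communication thread as soon as it is produced, so that transferring layer L's gradient overlaps with computing layer L-1's. This is a minimal illustrative sketch, not the paper's implementation; the function name, layer count, and sleep-based stand-ins for computation and transfer are all assumptions.

```python
import threading
import queue
import time

def backward_with_overlap(num_layers=4):
    """Simulate overlapping gradient communication with backpropagation.

    As each layer's gradient is produced during the backward pass, it is
    enqueued for a communication thread to send, so the transfer runs
    concurrently with the remaining backward computation.
    """
    grad_queue = queue.Queue()
    communicated = []

    def comm_worker():
        # Drain gradients and "send" them (simulated by a sleep), running
        # concurrently with the backward pass on the main thread.
        while True:
            item = grad_queue.get()
            if item is None:          # sentinel: backward pass is finished
                break
            layer, _grad = item
            time.sleep(0.01)          # stand-in for a network transfer
            communicated.append(layer)

    t = threading.Thread(target=comm_worker)
    t.start()

    # Backward pass: layers are processed from the output layer backwards.
    for layer in reversed(range(num_layers)):
        time.sleep(0.01)              # stand-in for gradient computation
        grad_queue.put((layer, f"grad[{layer}]"))

    grad_queue.put(None)              # no more gradients will arrive
    t.join()                          # wait until every gradient is sent
    return communicated

if __name__ == "__main__":
    print(backward_with_overlap(4))   # gradients sent in backward order: [3, 2, 1, 0]
```

In a real distributed setting the sleep in `comm_worker` would be a non-blocking collective (e.g. an asynchronous all-reduce of the gradient across nodes), but the queue-plus-thread structure is the same: communication latency is hidden behind the computation of the layers that have not yet been differentiated.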
Pages: 7
Related Papers (50 records)
  • [1] PARALLEL COMPUTING OPTIMIZATION IN THE APOLLO DOMAIN NETWORK
    PEKERGIN, MF
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 1992, 18 (04) : 296 - 303
  • [2] A Parallel Optimization of the Fast Algorithm of Convolution Neural Network on CPU
    Huang, JiaHao
    Wang, Tiejun
    Zhu, Xuhui
    Wei, Min
    Wu, Tao
    Wu, Xi
    Huang, Min
    2018 10TH INTERNATIONAL CONFERENCE ON MEASURING TECHNOLOGY AND MECHATRONICS AUTOMATION (ICMTMA), 2018, : 5 - 9
  • [3] Parallel Convolutional Neural Network (CNN) Accelerators Based on Stochastic Computing
    Zhang, Yawen
    Zhang, Xinyue
    Song, Jiahao
    Wang, Yuan
    Huang, Ru
    Wang, Runsheng
    PROCEEDINGS OF THE 2019 IEEE INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS 2019), 2019, : 19 - 24
  • [4] Parallel evolutionary computing using a cluster for mathematical function optimization
    Valdez, Fevrier
    Melin, Patricia
    NAFIPS 2007 - 2007 ANNUAL MEETING OF THE NORTH AMERICAN FUZZY INFORMATION PROCESSING SOCIETY, 2007, : 598 - +
  • [5] Design Method for an LUT Network-Based CNN with a Sparse Local Convolution
    Soga, Naoto
    Nakahara, Hiroki
    2020 INTERNATIONAL CONFERENCE ON FIELD-PROGRAMMABLE TECHNOLOGY (ICFPT 2020), 2020, : 294 - 295
  • [6] Breast Cancer Diagnosis Using Histopathology and Convolution Neural Network CNN Method
    Tayel, Mazhar B.
    Mokhtar, Mohamed-Amr A.
    Kishk, Ahmed F.
    INTERNATIONAL CONFERENCE ON INNOVATIVE COMPUTING AND COMMUNICATIONS, ICICC 2022, VOL 1, 2023, 473 : 585 - 600
  • [7] Parallel computing in optimization
    Rayward-Smith, VJ
    JOURNAL OF THE OPERATIONAL RESEARCH SOCIETY, 1998, 49 (07) : 770 - 771
  • [8] Optimization with parallel computing
    Kundu, S
    VECTOR AND PARALLEL PROCESSING - VECPAR 2000, 2001, 1981 : 221 - 229
  • [9] Grey wolf optimization (GWO) with the convolution neural network (CNN)-based pattern recognition system
    Jamshed, Aatif
    Mallick, Bhawna
    Bharti, Rajendra Kumar
    IMAGING SCIENCE JOURNAL, 2022, 70 (04): : 238 - 252
  • [10] Network and parallel computing
    Li, Keqiu
    Shen, Yanming
    Guo, Minyi
    COMPUTER SYSTEMS SCIENCE AND ENGINEERING, 2009, 24 (03): : 131 - 132