Parallel Approaches to Accelerate Deep Learning Processes Using Heterogeneous Computing

Cited by: 0
Authors
Nasimov, Rashid [1 ]
Rakhimov, Mekhriddin [2 ]
Javliev, Shakhzod [2 ]
Abdullaeva, Malika [2 ]
Affiliations
[1] Tashkent State Univ Econ, Tashkent, Uzbekistan
[2] Tashkent Univ Informat Technol, Tashkent, Uzbekistan
Keywords
artificial intelligence; deep learning; heterogeneous computing systems; OpenCL; CUDA technology; parallel processing
DOI
10.1007/978-3-031-60997-8_4
Chinese Library Classification (CLC)
TP3 (computing technology, computer technology)
Discipline classification code
0812
Abstract
The rise of artificial intelligence (AI) has sharpened the need to speed up training procedures, especially for deep learning on large volumes of data. The primary aim of this research is to significantly improve the time efficiency of deep learning processes. While graphics processing units (GPUs) are widely recognized to handle certain data-parallel tasks much faster than a computer's central processing unit (CPU), this study explores heterogeneous computing systems for situations in which a GPU is unavailable, investigating strategies that achieve higher processing speed with advanced parallel technologies. The study concludes with comparative results for the various approaches and recommendations for future work.
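To make the fallback idea concrete, the sketch below shows one way a data-parallel step could be dispatched to whatever OpenCL device is present, preferring a GPU and dropping back to the CPU when none is found. This is a minimal illustration written for this summary, assuming PyOpenCL and NumPy are installed; the pick_device helper, the vec_add kernel, and the selection order are illustrative assumptions, not code from the paper.

import numpy as np
import pyopencl as cl

def pick_device():
    # Illustrative fallback (not from the paper): prefer a GPU,
    # otherwise take any CPU device the OpenCL runtime exposes.
    for dev_type in (cl.device_type.GPU, cl.device_type.CPU):
        for platform in cl.get_platforms():
            try:
                devices = platform.get_devices(device_type=dev_type)
            except cl.Error:
                devices = []
            if devices:
                return devices[0]
    raise RuntimeError("No usable OpenCL device found")

device = pick_device()
ctx = cl.Context([device])
queue = cl.CommandQueue(ctx)

# Element-wise vector addition stands in for the kind of
# data-parallel arithmetic a deep learning layer performs.
program = cl.Program(ctx, """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.empty_like(a)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

program.vec_add(queue, (n,), None, a_buf, b_buf, out_buf)
cl.enqueue_copy(queue, out, out_buf)
print(device.name.strip(), np.allclose(out, a + b))

Because the same kernel source runs unchanged on GPU and CPU devices, the host code does not change when a discrete GPU is absent, which is the portability property that makes OpenCL attractive for the heterogeneous setting the abstract describes.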
Pages: 32-41
Page count: 10
Related papers
50 items in total
  • [1] Xia, Kaijian; Hu, Tao; Si, Wen. Research on Parallel Deep Learning for Heterogeneous Computing Architecture. JOURNAL OF GRID COMPUTING, 2020, 18(02): 177-179
  • [2] Kaijian Xia; Tao Hu; Wen Si. Research on Parallel Deep Learning for Heterogeneous Computing Architecture. Journal of Grid Computing, 2020, 18: 177-179
  • [3] Jiang, Chao; Ojika, David; Vallecorsa, Sofia; Kurth, Thorsten; Prabhat; Patel, Bhavesh; Lam, Herman. Accelerate Scientific Deep Learning Models on Heterogeneous Computing Platform with FPGA. 24TH INTERNATIONAL CONFERENCE ON COMPUTING IN HIGH ENERGY AND NUCLEAR PHYSICS (CHEP 2019), 2020, 245
  • [4] Romero-Laorden, D.; Villazon-Terrazas, J.; Martinez-Graullera, O.; Ibanez, A.; Parrilla, M.; Penas, M. Santos. Analysis of Parallel Computing Strategies to Accelerate Ultrasound Imaging Processes. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2016, 27(12): 3429-3440
  • [5] Dong, Daxiang; Liu, Ji; Wang, Xi; Gong, Weibao; Qin, An; Li, Xingjian; Yu, Dianhai; Valduriez, Patrick; Dou, Dejing. Elastic Deep Learning Using Knowledge Distillation with Heterogeneous Computing Resources. EURO-PAR 2021: PARALLEL PROCESSING WORKSHOPS, 2022, 13098: 116-128
  • [6] Georgiev, Dobromir; Gurov, Todor. Distributed Deep Learning on Heterogeneous Computing Resources Using Gossip Communication. LARGE-SCALE SCIENTIFIC COMPUTING (LSSC 2019), 2020, 11958: 220-227
  • [7] Liu, Zhihong; Xu, Xin; Qiao, Peng; Li, Dongsheng. Acceleration for Deep Reinforcement Learning using Parallel and Distributed Computing: A Survey. ACM COMPUTING SURVEYS, 2025, 57(04)
  • [8] Ahmed, Khandaker Mamun; Imteaj, Ahmed; Amini, M. Hadi. Federated Deep Learning for Heterogeneous Edge Computing. 20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021: 1146-1152
  • [9] Lee, JiYeoun; Choi, Hee-Jin. Deep Learning Approaches for Pathological Voice Detection Using Heterogeneous Parameters. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2020, E103D(08): 1920-1923
  • [10] Lin, Xiaoze; Lai, Liyang; Li, Huawei. Parallel Static Learning Toward Heterogeneous Computing Architectures. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, 43(03): 983-993