Parallelism in Deep Learning Accelerators

Cited by: 0
Authors
Song, Linghao [1]
Chen, Fan [1]
Chen, Yiran [1]
Li, Hai Helen [1]
Affiliation
[1] Duke Univ, Durham, NC 27706 USA
Keywords
DOI
None available
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Deep learning is the core of artificial intelligence, and it achieves state-of-the-art results in a wide range of applications. The computational and data intensity of deep learning processing poses significant challenges to conventional computing platforms; thus, specialized accelerator architectures have been proposed to accelerate deep learning. In this paper, we classify the design space of current deep learning accelerators into three levels: (1) processing engine, (2) memory, and (3) accelerator, and present a constructive view of the three levels from the perspective of parallelism.
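The three-level classification in the abstract is architectural, but the kind of parallelism it refers to can be illustrated with a minimal sketch (not from the paper; the function name, PE count, and block mapping below are illustrative assumptions): a matrix-vector product is split into independent row blocks, where each block stands in for the work one processing engine would execute over its own slice of weight memory, and the blocks themselves are the independent units an accelerator-level scheduler could dispatch.

import numpy as np

def matvec_by_pe_blocks(W, x, n_pes=4):
    # Hypothetical mapping: one row block of W per processing engine (PE).
    # PE level:          each block @ x is a stream of multiply-accumulates.
    # Memory level:      each PE only ever touches its own block of W.
    # Accelerator level: the row blocks are independent and could run in
    #                    parallel; here they are computed sequentially.
    row_blocks = np.array_split(W, n_pes, axis=0)
    partials = [block @ x for block in row_blocks]
    return np.concatenate(partials)

W = np.random.rand(8, 5)
x = np.random.rand(5)
assert np.allclose(matvec_by_pe_blocks(W, x), W @ x)  # matches the direct product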
Pages: 645 - 650
Page count: 6
Related Papers
50 records in total
  • [1] Exploring Parallelism in the Deep Learning Arena
    Jamali, Mohsin M.
    2018 IEEE 61ST INTERNATIONAL MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS (MWSCAS), 2018, : 170 - 173
  • [2] Scheduling Dynamic Parallelism on Accelerators
    Blagojevic, Filip
    Iancu, Costin
    Yelick, Katherine
    Curtis-Maury, Matthew
    Nikolopoulos, Dimitrios S.
    Rose, Benjamin
    CF'09: CONFERENCE ON COMPUTING FRONTIERS & WORKSHOPS, 2009, : 161 - 170
  • [3] Ergodic Approximate Deep Learning Accelerators
    van Lijssel, Tim
    Balatsoukas-Stimming, Alexios
    FIFTY-SEVENTH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2023, : 734 - 738
  • [4] Hardware Accelerators for Deep Reinforcement Learning
    Mishra, Vinod K.
    Basu, Kanad
    Arunachalam, Ayush
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V, 2023, 12538
  • [5] AdequateDL: Approximating Deep Learning Accelerators
    Sentieys, Olivier
    Filip, Silviu
    Briand, David
    Novo, David
    Dupuis, Etienne
    O'Connor, Ian
    Bosio, Alberto
    2021 24TH INTERNATIONAL SYMPOSIUM ON DESIGN AND DIAGNOSTICS OF ELECTRONIC CIRCUITS & SYSTEMS (DDECS), 2021, : 37 - 40
  • [6] Model Parallelism Optimization with Deep Reinforcement Learning
    Mirhoseini, Azalia
    2018 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW 2018), 2018, : 855 - 855
  • [7] Exploiting Parallelism Opportunities with Deep Learning Frameworks
    Wang, Yu Emma
    Wu, Carole-Jean
    Wang, Xiaodong
    Hazelwood, Kim
    Brooks, David
    ACM TRANSACTIONS ON ARCHITECTURE AND CODE OPTIMIZATION, 2021, 18 (01)
  • [8] On the Acceleration of Deep Learning Model Parallelism with Staleness
    Xu, An
    Huo, Zhouyuan
    Huang, Heng
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 2085 - 2094
  • [9] Chronos: Efficient Speculative Parallelism for Accelerators
    Abeydeera, Maleen
    Sanchez, Daniel
    TWENTY-FIFTH INTERNATIONAL CONFERENCE ON ARCHITECTURAL SUPPORT FOR PROGRAMMING LANGUAGES AND OPERATING SYSTEMS (ASPLOS XXV), 2020, : 1247 - 1262
  • [10] Exploiting deep learning accelerators for neuromorphic workloads
    Sun, Pao-Sheng Vincent
    Titterton, Alexander
    Gopiani, Anjlee
    Santos, Tim
    Basu, Arindam
    Lu, Wei D.
    Eshraghian, Jason K.
    NEUROMORPHIC COMPUTING AND ENGINEERING, 2024, 4 (01)