Analysis of on-chip communication properties in accelerator architectures for Deep Neural Networks

Cited by: 7
Authors
Krichene, Hana [1 ]
Philippe, Jean-Marc [1 ]
Affiliations
[1] Univ Paris Saclay, CEA, List, F-91120 Palaiseau, France
Keywords
Network-on-Chip; Deep Neural Networks; Artificial Intelligence; CNN accelerators;
DOI
10.1145/3479876.3481588
Chinese Library Classification
TP3 [Computing technology, computer technology];
Discipline code
0812;
Abstract
Deep neural network (DNN) algorithms are expected to be core components of next-generation applications. These high-performance sensing and recognition algorithms are key enabling technologies of smarter systems that make appropriate decisions about their environment. Integrating these compute-intensive and memory-hungry algorithms into embedded systems will require specific energy-efficient hardware accelerators. The intrinsic parallelism of DNN algorithms allows the use of a large number of small processing elements, and tight exploitation of data reuse can significantly reduce power consumption. To support these features, many dataflow models and on-chip communication proposals have been studied in recent years. This paper proposes a comprehensive study of on-chip communication properties based on the analysis of application-specific features, such as data reuse and communication models, as well as the results of mapping these applications onto architectures of different sizes. In addition, the influence of mechanisms such as broadcast and multicast on performance and energy efficiency is analyzed. This study leads to the definition of overarching features to be integrated into next-generation on-chip communication infrastructures for CNN accelerators.
Pages: 9-14
Page count: 6