Energy-Efficient and High-Performance NoC Architecture and Mapping Solution for Deep Neural Networks

Cited by: 16
Authors
Reza, Md Farhadur [1 ]
Ampadu, Paul [1 ]
Affiliations
[1] Virginia Polytech Inst & State Univ, Dept Elect & Comp Engn, Blacksburg, VA 24061 USA
Keywords
Deep Neural Network (DNN); Mapping; Network-on-Chip (NoC); RESOURCE CO-ALLOCATION; ON-CHIP; TOPOLOGIES;
DOI
10.1145/3313231.3352377
Chinese Library Classification (CLC): TP301 [Theory, Methods];
Subject Classification Code: 081202;
Abstract
With the advancement and miniaturization of transistor technology, hundreds of cores can be integrated on a single chip. Networks-on-Chip (NoCs) are the de facto on-chip communication fabric for multi-/many-core systems because of their advantages over traditional buses in scalability, parallelism, and power efficiency [20]. These properties make NoC a natural communication architecture for the layers of a deep neural network. However, traditional NoC architectures and strategies may not suit deep neural networks because of their distinctive communication patterns (e.g., one-to-many and many-to-one communication between layers, and no communication within a single layer). Moreover, because of these communication patterns, the computations of the different layers of a neural network must be mapped so as to reduce communication bottlenecks in the NoC. We therefore explore different NoC architectures and mapping solutions for deep neural networks, and propose an efficient concentrated-mesh NoC architecture and a load-balanced mapping solution (including a mathematical model) for accelerating deep neural networks. We also present preliminary results showing the effectiveness of the proposed approaches in accelerating deep neural networks while achieving an energy-efficient, high-performance NoC.
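The load-balanced mapping idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm or model; the tile enumeration, the round-robin assignment, and all function and parameter names (`cmesh_tiles`, `load_balanced_map`, `concentration`, the layer sizes) are illustrative assumptions. It shows one simple way to spread each layer's neurons evenly over the tiles of a concentrated mesh, so that the one-to-many and many-to-one traffic between adjacent layers does not concentrate on a few routers.

```python
# Illustrative sketch only: load-balanced assignment of DNN-layer neurons
# to the tiles of a concentrated mesh NoC (multiple cores per router).
from collections import defaultdict


def cmesh_tiles(rows, cols, concentration):
    """Enumerate (router, core) tiles of a concentrated mesh: each of the
    rows*cols routers concentrates `concentration` cores."""
    return [((r, c), k)
            for r in range(rows)
            for c in range(cols)
            for k in range(concentration)]


def load_balanced_map(layer_sizes, tiles):
    """Assign each layer's neurons round-robin over all tiles so every tile
    carries a near-equal share of every layer, avoiding traffic hotspots."""
    mapping = defaultdict(list)  # tile -> list of (layer, neuron) ids
    for layer, size in enumerate(layer_sizes):
        for neuron in range(size):
            mapping[tiles[neuron % len(tiles)]].append((layer, neuron))
    return mapping


# Example: a 2x2 concentrated mesh with 4 cores per router -> 16 tiles,
# mapping a small 784-256-10 fully connected network.
tiles = cmesh_tiles(2, 2, 4)
mapping = load_balanced_map([784, 256, 10], tiles)
loads = [len(v) for v in mapping.values()]
print(max(loads) - min(loads))  # -> 1 (tile loads differ by at most one)
```

Under this round-robin scheme, every tile holds a slice of every layer, so inter-layer traffic is spread across the whole mesh rather than funneled between two regions; the paper's actual solution additionally uses a mathematical model to drive the placement.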
Pages: 8
Related Papers
50 records total
  • [41] Pruning Deep Neural Networks for Green Energy-Efficient Models: A Survey
    Tmamna, Jihene
    Ben Ayed, Emna
    Fourati, Rahma
    Gogate, Mandar
    Arslan, Tughrul
    Hussain, Amir
    Ayed, Mounir Ben
    [J]. COGNITIVE COMPUTATION, 2024,
  • [42] Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition
    Cao, Yongqiang
    Chen, Yang
    Khosla, Deepak
    [J]. International Journal of Computer Vision, 2015, 113 : 54 - 66
  • [43] A Pipelined Energy-efficient Hardware Acceleration for Deep Convolutional Neural Networks
    Alaeddine, Hmidi
    Jihene, Malek
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON DESIGN & TEST OF INTEGRATED MICRO & NANO-SYSTEMS (DTS), 2019,
  • [44] Sectored DRAM: A Practical Energy-Efficient and High-Performance Fine-Grained DRAM Architecture
    Olgun, Ataberk
    Bostanci, F. Nisa
    de Oliveira Junior, Geraldo Francisco
    Tugrul, Yahya Can
    Bera, Rahul
    Yaglikci, Abdullah Giray
    Hassan, Hasan
    Ergin, Oguz
    Mutlu, Onur
    [J]. ACM TRANSACTIONS ON ARCHITECTURE AND CODE OPTIMIZATION, 2024, 21 (03)
  • [45] Galaxy: A High-Performance Energy-Efficient Multi-Chip Architecture Using Photonic Interconnects
    Demir, Yigit
    Pan, Yan
    Song, Seukwoo
    Hardavellas, Nikos
    Kim, John
    Memik, Gokhan
    [J]. PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON SUPERCOMPUTING, (ICS'14), 2014, : 303 - 312
  • [46] Energy-Efficient Architecture for Neural Spikes Acquisition
    Osipov, Dmitry
    Paul, Steffen
    Stemmann, Heiko
    Kreiter, Andreas K.
    [J]. 2018 IEEE BIOMEDICAL CIRCUITS AND SYSTEMS CONFERENCE (BIOCAS): ADVANCED SYSTEMS FOR ENHANCING HUMAN HEALTH, 2018, : 439 - 442
  • [47] An energy-efficient coarse grained spatial architecture for convolutional neural networks AlexNet
    Zhao, Boya
    Wang, Mingjiang
    Liu, Ming
    [J]. IEICE ELECTRONICS EXPRESS, 2017, 14 (15):
  • [48] Energy-Efficient, High-Performance, Highly-Compressed Deep Neural Network Design using Block-Circulant Matrices
    Liao, Siyu
    Li, Zhe
    Lin, Xue
    Qiu, Qinru
    Wang, Yanzhi
    Yuan, Bo
    [J]. 2017 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER-AIDED DESIGN (ICCAD), 2017, : 458 - 465
  • [49] A Parallel RRAM Synaptic Array Architecture for Energy-Efficient Recurrent Neural Networks
    Yin, Shihui
    Sun, Xiaoyu
    Yu, Shimeng
    Seo, Jae-sun
    Chakrabarti, Chaitali
    [J]. PROCEEDINGS OF THE 2018 IEEE INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS), 2018, : 13 - 18
  • [50] A Heterogeneous and Reconfigurable Embedded Architecture for Energy-Efficient Execution of Convolutional Neural Networks
    Luebeck, Konstantin
    Bringmann, Oliver
    [J]. ARCHITECTURE OF COMPUTING SYSTEMS - ARCS 2019, 2019, 11479 : 267 - 280