Energy-Efficient and High-Performance NoC Architecture and Mapping Solution for Deep Neural Networks

Cited by: 16
Authors
Reza, Md Farhadur [1 ]
Ampadu, Paul [1 ]
Affiliation
[1] Virginia Polytech Inst & State Univ, Dept Elect & Comp Engn, Blacksburg, VA 24061 USA
Keywords
Deep Neural Network (DNN); Mapping; Network-on-Chip (NoC); Resource Co-Allocation; On-Chip; Topologies
DOI
10.1145/3313231.3352377
Chinese Library Classification (CLC)
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
With the advancement and miniaturization of transistor technology, hundreds of cores can be integrated on a single chip. Networks-on-Chip (NoCs) are the de facto on-chip communication fabric for multi-/many-core systems because of their advantages over traditional buses in scalability, parallelism, and power efficiency [20]. These properties make NoC a natural fit for the communication architecture between the layers of a deep neural network. However, traditional NoC architectures and strategies may not be well suited to running deep neural networks because of the distinctive communication patterns in neural networks (e.g., one-to-many and many-to-one communication between layers, and no communication within a single layer). Furthermore, because of these communication patterns, the computations of the different layers of a neural network need to be mapped in a way that reduces communication bottlenecks in the NoC. We therefore explore different NoC architectures and mapping solutions for deep neural networks, and propose an efficient concentrated mesh NoC architecture and a load-balanced mapping solution (including a mathematical model) for accelerating deep neural networks. We also present preliminary results showing that the proposed approaches accelerate deep neural networks while achieving an energy-efficient and high-performance NoC.
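The abstract mentions a load-balanced mapping solution and a mathematical model but gives no details. The sketch below is only a hedged illustration of the general idea: a greedy heuristic that spreads each layer's neurons across a concentrated mesh to balance per-router load while keeping consecutive layers on nearby routers. All names (`build_cmesh`, `map_layers`), the per-router PE capacity, and the cost weights `alpha`/`beta` are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch (not the authors' actual algorithm): greedily map the neurons
# of each DNN layer onto a concentrated mesh NoC so that per-router compute
# load stays balanced while consecutive layers land on nearby routers.

from itertools import product

def build_cmesh(rows, cols):
    """Return router coordinates of a rows x cols concentrated mesh."""
    return list(product(range(rows), range(cols)))

def manhattan(a, b):
    """Hop distance between two routers under XY routing."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def map_layers(layer_sizes, routers, concentration=4, alpha=1.0, beta=1.0):
    """Assign each layer's neurons to routers.

    The placement cost for a router combines its current load (for balance)
    and its distance to the previous layer's centroid (for low hop count).
    `concentration`, `alpha`, and `beta` are illustrative parameters.
    """
    load = {r: 0 for r in routers}          # neurons already placed per router
    capacity = concentration * 64           # assumed neuron capacity per router
    mapping = []                            # mapping[l] = list of (router, count)
    prev_centroid = routers[len(routers) // 2]

    for neurons in layer_sizes:
        placed, layer_map = 0, []
        while placed < neurons:
            # pick the router with the lowest weighted cost that still has room
            candidates = [r for r in routers if load[r] < capacity]
            r = min(candidates,
                    key=lambda c: alpha * load[c] + beta * manhattan(c, prev_centroid))
            take = min(neurons - placed, capacity - load[r])
            load[r] += take
            layer_map.append((r, take))
            placed += take
        mapping.append(layer_map)
        # the centroid of this layer attracts the next layer's placement
        xs = [r[0] for r, _ in layer_map]
        ys = [r[1] for r, _ in layer_map]
        prev_centroid = (sum(xs) // len(xs), sum(ys) // len(ys))
    return mapping

if __name__ == "__main__":
    routers = build_cmesh(4, 4)                       # 4x4 concentrated mesh
    result = map_layers([784, 256, 64, 10], routers)  # a small MLP, for illustration
    for layer, assignment in enumerate(result):
        print(f"layer {layer}: {assignment}")
```

The weighted cost reflects the trade-off the abstract points to: pure load balancing would scatter a layer across the chip and inflate hop counts for the one-to-many traffic between layers, while pure locality would overload a few routers; the two weights let the heuristic interpolate between the extremes.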
Pages: 8
Related Papers (50 in total)
  • [1] High-performance and energy-efficient fault-tolerance core mapping in NoC
    Beechu, Naresh Kumar Reddy
    Harishchandra, Vasantha Moodabettu
    Balachandra, Nithin Kumar Yernad
    [J]. SUSTAINABLE COMPUTING-INFORMATICS & SYSTEMS, 2017, 16 : 1 - 10
  • [2] High-Performance Energy-Efficient NoC Fabrics: Evolution and Future Challenges
    Anders, Mark A.
    [J]. 2014 EIGHTH IEEE/ACM INTERNATIONAL SYMPOSIUM ON NETWORKS-ON-CHIP (NOCS), 2014, : I - I
  • [3] DRAMA: An Approximate DRAM Architecture for High-performance and Energy-efficient Deep Training System
    Duy-Thanh Nguyen
    Min, Chang-Hong
    Nhut-Minh Ho
    Chang, Ik-Joon
    [J]. 2020 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED-DESIGN (ICCAD), 2020,
  • [4] A High-Performance, Energy-Efficient Modular DMA Engine Architecture
    Benz, Thomas
    Rogenmoser, Michael
    Scheffler, Paul
    Riedel, Samuel
    Ottaviano, Alessandro
    Kurth, Andreas
    Hoefler, Torsten
    Benini, Luca
    [J]. IEEE TRANSACTIONS ON COMPUTERS, 2024, 73 (01) : 263 - 277
  • [5] SLIT: An Energy-Efficient Reconfigurable Hardware Architecture for Deep Convolutional Neural Networks
    Tran, Thi Diem
    Nakashima, Yasuhiko
    [J]. IEICE TRANSACTIONS ON ELECTRONICS, 2021, E104C (07) : 319 - 329
  • [6] Energy-Efficient and High-Performance Software Architecture for Storage Class Memory
    Baek, Seungjae
    Choi, Jongmoo
    Lee, Donghee
    Noh, Sam H.
    [J]. ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2013, 12 (03)
  • [7] MuDBN: An Energy-Efficient and High-Performance Multi-FPGA Accelerator for Deep Belief Networks
    Cheng, Yuming
    Wang, Chao
    Zhao, Yangyang
    Chen, Xianglan
    Zhou, Xuehai
    Li, Xi
    [J]. PROCEEDINGS OF THE 2018 GREAT LAKES SYMPOSIUM ON VLSI (GLSVLSI'18), 2018, : 435 - 438
  • [8] Mapping Model and Heuristics for Accelerating Deep Neural Networks and for Energy-Efficient Networks-on-Chip
    Reza, Md Farhadur
    Yeazel, Alex
    [J]. SOUTHEASTCON 2024, 2024, : 119 - 126
  • [9] RESPARC: A Reconfigurable and Energy-Efficient Architecture with Memristive Crossbars for Deep Spiking Neural Networks
    Ankit, Aayush
    Sengupta, Abhronil
    Panda, Priyadarshini
    Roy, Kaushik
    [J]. PROCEEDINGS OF THE 2017 54TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2017,
  • [10] EDEN: Enabling Energy-Efficient, High-Performance Deep Neural Network Inference Using Approximate DRAM
    Koppula, Skanda
    Orosa, Lois
    Yaglikci, A. Giray
    Azizi, Roknoddin
    Shahroodi, Taha
    Kanellopoulos, Konstantinos
    Mutlu, Onur
    [J]. MICRO'52: THE 52ND ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, 2019, : 166 - 181