FullReuse: A Novel ReRAM-based CNN Accelerator Reusing Data in Multiple Levels

Citations: 0
|
Authors
Luo, Changhang [1 ]
Diao, Jietao [1 ]
Chen, Changlin [1 ]
Affiliations
[1] Natl Univ Def Technol, Res Ctr Intelligent Devices Circuits & Syst, Changsha, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
ReRAM; convolutional neural networks; hardware accelerator; data reuse;
DOI
10.1109/icicm50929.2020.9292144
Chinese Library Classification
TM (Electrical Engineering); TN (Electronics and Communication Technology);
Discipline Codes
0808 ; 0809 ;
Abstract
The processing of Convolutional Neural Networks (CNNs) involves a large amount of data movement and thus usually incurs significant latency and energy consumption. Resistive Random Access Memory (ReRAM) based CNN accelerators with a Processing-In-Memory (PIM) architecture are deemed a promising solution for improving energy efficiency. However, the weight mapping methods and the corresponding dataflows in state-of-the-art accelerators are not yet well designed to fully exploit the data reuse possible in CNN inference. In this paper, we propose a new ReRAM-based PIM architecture named FullReuse, in which all types of data reuse are realized with novel, simple hardware circuits. The latency and energy consumption of buffer and interconnect data movements are minimized. Experiments with the VGG network on the NeuroSim platform show that FullReuse achieves up to a 1.6x improvement in processing speed compared with state-of-the-art accelerators, at comparable power efficiency and a 14% area overhead.
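The data-reuse opportunities the abstract refers to can be illustrated with a small back-of-the-envelope calculation. This is a sketch for intuition only, not code from the paper: with a stride of 1 and no padding, every kernel weight is applied at every output position, and every interior input pixel is covered by K x K sliding windows.

```python
# Sketch (not from the paper): reuse factor of each operand type
# in one convolutional layer, stride 1, no padding.

def conv_reuse_factors(H, W, K):
    """Reuse factors for an HxW input convolved with a KxK kernel."""
    out_h, out_w = H - K + 1, W - K + 1
    weight_reuse = out_h * out_w                 # output positions per weight
    input_reuse = min(K, out_h) * min(K, out_w)  # windows covering an interior pixel
    return weight_reuse, input_reuse

# Example: a 224x224 feature map with a 3x3 kernel.
print(conv_reuse_factors(224, 224, 3))  # (49284, 9)
```

The asymmetry of these factors (each weight reused ~49k times versus 9 times per input pixel) is why mapping and dataflow choices matter: an accelerator that captures only one reuse type still pays buffer and interconnect traffic for the others.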
Pages: 177-183
Page count: 7
Related Papers
50 items in total
  • [21] RePAIR: A ReRAM-based Processing-in-Memory Accelerator for Indel Realignment
    Wu, Ting
    Nien, Chin-Fu
    Chou, Kuang-Chao
    Cheng, Hsiang-Yun
    PROCEEDINGS OF THE 2022 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2022), 2022, : 400 - 405
  • [22] ReQUSA: a novel ReRAM-based hardware accelerator architecture for high-speed quantum computer simulation
    Lee, Sanghyeon
    Hour, Leanghok
    Kim, Yongtae
    Han, Youngsun
    PHYSICA SCRIPTA, 2024, 99 (03)
  • [23] MAX2: An ReRAM-Based Neural Network Accelerator That Maximizes Data Reuse and Area Utilization
    Mao, Manqing
    Peng, Xiaochen
    Liu, Rui
    Li, Jingtao
    Yu, Shimeng
    Chakrabarti, Chaitali
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2019, 9 (02) : 398 - 410
  • [24] ReCSA: a dedicated sort accelerator using ReRAM-based content addressable memory
    Li, Huize
    Jin, Hai
    Zheng, Long
    Huang, Yu
    Liao, Xiaofei
    FRONTIERS OF COMPUTER SCIENCE, 2023, 17 (02)
  • [25] AUTO-PRUNE: Automated DNN Pruning and Mapping for ReRAM-Based Accelerator
    Yang, Siling
    Chen, Weijian
    Zhang, Xuechen
    He, Shuibing
    Yin, Yanlong
    Sun, Xian-He
    PROCEEDINGS OF THE 2021 ACM INTERNATIONAL CONFERENCE ON SUPERCOMPUTING, ICS 2021, 2021, : 304 - 315
  • [26] An Energy-Efficient Mixed-Bit CNN Accelerator With Column Parallel Readout for ReRAM-Based In-Memory Computing
    Liu, Dingbang
    Zhou, Haoxiang
    Mao, Wei
    Liu, Jun
    Han, Yuliang
    Man, Changhai
    Wu, Qiuping
    Guo, Zhiru
    Huang, Mingqiang
    Luo, Shaobo
    Lv, Mingsong
    Chen, Quan
    Yu, Hao
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2022, 12 (04) : 821 - 834
  • [27] An Energy-Efficient Mixed-Bit ReRAM-based Computing-in-Memory CNN Accelerator with Fully Parallel Readout
    Liu, Dingbang
    Mao, Wei
    Zhou, Haoxiang
    Liu, Jun
    Wu, Qiuping
    Hong, Haiqiao
    Yu, Hao
    2022 IEEE ASIA PACIFIC CONFERENCE ON CIRCUITS AND SYSTEMS, APCCAS, 2022, : 515 - 519
  • [28] REC: REtime Convolutional layers in energy harvesting ReRAM-based CNN accelerators
    Zhou, Kunyu
    Qiu, Keni
    PROCEEDINGS OF THE 19TH ACM INTERNATIONAL CONFERENCE ON COMPUTING FRONTIERS 2022 (CF 2022), 2022, : 185 - 188
  • [30] A Reduced Architecture for ReRAM-Based Neural Network Accelerator and Its Software Stack
    Ji, Yu
    Liu, Zixin
    Zhang, Youhui
    IEEE TRANSACTIONS ON COMPUTERS, 2021, 70 (03) : 316 - 331