DRAM-Based Processor for Deep Neural Networks Without SRAM Cache

Citations: 0
Authors
Tam, Eugene [1 ]
Jiang, Shenfei [1 ]
Duan, Paul [1 ]
Meng, Shawn [1 ]
Pan, Yue [1 ]
Huang, Cayden [1 ]
Han, Yi [1 ]
Xie, Jacke [1 ]
Cui, Yuanjun [1 ]
Yu, Jinsong [1 ]
Lu, Minggui [1 ]
Affiliations
[1] IC League Inc, Haining, Peoples R China
Source
Keywords
Neural network; Artificial intelligence; Processor; Deep learning;
DOI
10.1007/978-3-030-80126-7_52
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Modern computing architectures use cache memory as a buffer between high-speed computing units and high-latency main memory. Higher-capacity caches are thought to be critical for deep neural network processors, which handle large amounts of data. However, as cache capacity grows, it occupies die area that could otherwise be used for computing units: this is the inherent trade-off between memory capacity and compute performance. In this work, we present a deep neural network processing chip with a near-memory computing architecture. We eliminate the SRAM cache and use DRAM alone as on-chip memory, delivering both high performance and high memory capacity.
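The capacity-versus-area trade-off described in the abstract can be made concrete with a back-of-envelope sketch: a 6T SRAM bitcell is substantially larger than a 1T1C DRAM bitcell, so a fixed silicon budget holds far more DRAM than SRAM. The cell areas and die-area budget below are illustrative assumptions, not figures from the paper.

```python
# Rough sketch of on-chip memory capacity for a fixed die-area budget.
# Bitcell areas are hypothetical round numbers chosen only to show the
# direction of the trade-off, not values reported in this work.

def capacity_bits(area_mm2: float, cell_area_um2: float) -> float:
    """Bits that fit in area_mm2 of silicon at cell_area_um2 per bit."""
    return area_mm2 * 1e6 / cell_area_um2  # 1 mm^2 = 1e6 um^2

SRAM_CELL_UM2 = 0.05   # assumed 6T SRAM bitcell area
DRAM_CELL_UM2 = 0.005  # assumed 1T1C embedded-DRAM bitcell area

budget_mm2 = 10.0  # assumed die area devoted to on-chip memory

sram_mib = capacity_bits(budget_mm2, SRAM_CELL_UM2) / 8 / 2**20
dram_mib = capacity_bits(budget_mm2, DRAM_CELL_UM2) / 8 / 2**20

print(f"SRAM: {sram_mib:.1f} MiB, DRAM: {dram_mib:.1f} MiB "
      f"in {budget_mm2} mm^2")
```

Under these assumed cell sizes, the same area holds an order of magnitude more DRAM than SRAM, which is the motivation for trading the SRAM cache for on-chip DRAM.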
Pages: 743-753
Number of pages: 11