An overview memristor based hardware accelerators for deep neural network

Cited by: 4
Authors
Gokgoz, Baki [1 ]
Gul, Fatih [2 ,4 ]
Aydin, Tolga [3 ]
Affiliations
[1] Gumushane Univ, Torul Vocat Sch, Dept Comp Technol, Gumushane, Turkiye
[2] Recep Tayyip Erdogan Univ, Fac Engn & Architecture, Elect & Elect Engn, Rize, Turkiye
[3] Ataturk Univ, Fac Engn, Comp Engn, Erzurum, Turkiye
[4] Recep Tayyip Erdogan Univ, Dept Elect Elect Engn, Rize, Turkiye
Keywords
AI accelerators; deep learning; memristors; neuromorphic computing; synapses; TIMING-DEPENDENT PLASTICITY; RANDOM-ACCESS MEMORY; SYNAPTIC PLASTICITY; SPIKING; CIRCUIT; CMOS; RECOGNITION; DEVICES; DESIGN; ARCHITECTURE;
DOI
10.1002/cpe.7997
CLC number: TP31 [Computer Software]
Subject classification: 081202; 0835
Abstract
The prevalence of artificial intelligence applications that use artificial neural network architectures for tasks such as natural language processing, text prediction, object detection, and speech and image recognition has increased significantly. In conventional implementations, the computations performed by artificial neural networks require intensive, large-scale data movement between memory and processing units. Various software and hardware efforts aim to perform these operations more efficiently; despite them, latency in data traffic and the substantial energy consumed in data processing remain the bottleneck disadvantages of the Von Neumann architecture. Overcoming this bottleneck requires hardware units tailored to artificial intelligence applications. For this purpose, neuro-inspired computing chips are believed to provide an effective approach by designing and integrating, at the hardware level, features inspired by neurobiological systems. The most notable of these approaches is memristor-based neuromorphic computing. Memristors are seen as promising devices for hardware-level improvements in speed and energy because they possess non-volatile memory and exhibit analog behavior; this enables effective storage and processing of synaptic weights directly in memory. Taking these advantages into account, this study surveys research on artificial neural networks and on hardware that can directly perform deep learning functions and mimic the biological brain, in contrast to classical systems.
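The in-memory storage and processing of synaptic weights described above can be sketched numerically. The following is an illustrative model (not taken from the paper; all variable names and the conductance range are assumptions): a memristor crossbar stores a weight matrix as device conductances G, input activations are applied as row voltages V, and each column wire sums the per-cell currents by Kirchhoff's current law, so the column current vector is the analog dot product I = V @ G.

```python
import numpy as np

# Hypothetical crossbar dimensions and device conductance range (assumptions
# for illustration only): 4 input rows x 3 output columns.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # synaptic weights as conductances (siemens)
V = np.array([0.1, 0.2, 0.0, 0.3])          # input activations as row voltages (volts)

# Ohm's law per cell (i = g * v) plus Kirchhoff current summation per column
# yields the vector-matrix product in one analog step, with no weight movement.
I = V @ G                                   # column currents (amperes)

# A real accelerator would digitize I with ADCs; here we just verify that the
# idealized crossbar output equals the mathematical matrix-vector product.
assert np.allclose(I, G.T @ V)
print(I.shape)  # (3,)
```

This is the core reason crossbars sidestep the Von Neumann bottleneck: the weights never leave the array, so the dominant data movement of a classical matrix multiply is eliminated.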
Pages: 22