An overview memristor based hardware accelerators for deep neural network

Times Cited: 4
Authors
Gokgoz, Baki [1 ]
Gul, Fatih [2 ,4 ]
Aydin, Tolga [3 ]
Affiliations
[1] Gumushane Univ, Torul Vocat Sch, Dept Comp Technol, Gumushane, Turkiye
[2] Recep Tayyip Erdogan Univ, Fac Engn & Architecture, Elect & Elect Engn, Rize, Turkiye
[3] Ataturk Univ, Fac Engn, Comp Engn, Erzurum, Turkiye
[4] Recep Tayyip Erdogan Univ, Dept Elect Elect Engn, Rize, Turkiye
Source
Keywords
AI accelerators; deep learning; memristors; neuromorphic computing; synapses; timing-dependent plasticity; random-access memory; synaptic plasticity; spiking; circuit; CMOS; recognition; devices; design; architecture
DOI
10.1002/cpe.7997
Chinese Library Classification (CLC)
TP31 [Computer Software]
Subject Classification Code
081202; 0835
Abstract
Artificial intelligence applications built on artificial neural network architectures, for tasks such as natural language processing, text prediction, object detection, and speech and image recognition, have become increasingly prevalent. The computations these networks perform on conventional platforms require intensive, large-scale data movement between memory and processing units. Although various software and hardware efforts aim to carry out these operations more efficiently, the latency of data traffic and the substantial energy consumed in moving and processing data remain bottlenecks of the von Neumann architecture. Overcoming this bottleneck requires hardware units tailored to artificial intelligence workloads. Neuro-inspired computing chips, which design and integrate features modeled on neurobiological systems at the hardware level, are regarded as an effective approach to this problem. The most notable of these approaches is memristor-based neuromorphic computing. Memristors are promising devices for improving speed and energy efficiency at the hardware level because they provide non-volatile memory and exhibit analog behavior, enabling synaptic weights to be stored and processed in place. Taking these advantages into account, this study surveys research on artificial neural networks and on hardware that can directly perform deep learning functions and mimic the biological brain, in contrast to classical computing systems.
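The abstract notes that memristors store synaptic weights as analog, non-volatile conductances and process them in place. As an illustration only, and not taken from the paper, the minimal Python sketch below simulates an idealized memristor crossbar performing the core in-memory operation, a vector-matrix multiply via Ohm's and Kirchhoff's laws. The conductance range G_MIN/G_MAX, the differential-pair weight mapping, and all function names are assumptions of this sketch, and device non-idealities (wire resistance, noise, drift) are ignored.

import numpy as np

# Assumed device conductance range in siemens; real memristors vary widely.
G_MIN, G_MAX = 1e-6, 1e-4

def weights_to_conductances(W):
    """Map a real-valued weight matrix onto two non-negative conductance arrays
    (a differential pair per weight, so negative weights can be represented)."""
    w_abs_max = max(float(np.max(np.abs(W))), 1e-12)
    scale = (G_MAX - G_MIN) / w_abs_max
    G_pos = G_MIN + scale * np.clip(W, 0.0, None)    # positive parts of W
    G_neg = G_MIN + scale * np.clip(-W, 0.0, None)   # magnitudes of negative parts
    return G_pos, G_neg, scale

def crossbar_mvm(V, G_pos, G_neg, scale):
    """Ideal crossbar read-out: row voltages V drive column currents I = V @ G,
    and the two columns of each differential pair are read out subtractively."""
    I = V @ (G_pos - G_neg)   # Kirchhoff's current law sums device currents per column
    return I / scale          # rescale currents back into the weight domain

# Usage: the "analog" result matches the digital matrix product exactly
# because the model is ideal.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # synaptic weights stored in the crossbar
V = rng.normal(size=(2, 4))   # input activations applied as row voltages
print(np.allclose(crossbar_mvm(V, *weights_to_conductances(W)), V @ W))  # True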
Pages: 22