An overview of memristor-based hardware accelerators for deep neural networks

Cited by: 4
Authors
Gokgoz, Baki [1 ]
Gul, Fatih [2 ,4 ]
Aydin, Tolga [3 ]
Affiliations
[1] Gumushane Univ, Torul Vocat Sch, Dept Comp Technol, Gumushane, Turkiye
[2] Recep Tayyip Erdogan Univ, Fac Engn & Architecture, Elect & Elect Engn, Rize, Turkiye
[3] Ataturk Univ, Fac Engn, Comp Engn, Erzurum, Turkiye
[4] Recep Tayyip Erdogan Univ, Dept Elect Elect Engn, Rize, Turkiye
Source
Keywords
AI accelerators; deep learning; memristors; neuromorphic computing; synapses; timing-dependent plasticity; random-access memory; synaptic plasticity; spiking; circuit; CMOS; recognition; devices; design; architecture
DOI
10.1002/cpe.7997
Chinese Library Classification
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
The prevalence of artificial intelligence applications built on artificial neural network architectures, for tasks such as natural language processing, text prediction, object detection, and speech and image recognition, has increased significantly. In classical systems, the computations performed by artificial neural networks require intensive, large-scale data movement between memory and processing units. Various software and hardware efforts aim to perform these operations more efficiently, yet the latency of data traffic and the substantial energy consumed in moving data remain bottlenecks of the von Neumann architecture. Overcoming this bottleneck requires hardware units designed specifically for artificial intelligence workloads. Neuro-inspired computing chips, which design and integrate features inspired by neurobiological systems at the hardware level, are believed to provide an effective approach to these problems. The most notable among these approaches are memristor-based neuromorphic computing systems. Memristors are seen as promising devices for hardware-level improvements in speed and energy because they combine non-volatile memory with analog behavior, enabling synaptic weights to be stored and processed in place. Considering these advantages of memristors, this study surveys research on artificial neural networks and on hardware that, unlike classical systems, can directly perform deep learning functions and mimic the biological brain.
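The in-place storage and processing the abstract describes can be illustrated with a small numerical sketch: in an idealized memristor crossbar, weights are mapped to device conductances, input activations are applied as row voltages, and Ohm's law plus Kirchhoff's current law yield the matrix-vector product as column currents in a single analog step. The conductance range, the differential-pair mapping, and all variable names below are illustrative assumptions, not details from the article.

```python
import numpy as np

def crossbar_mvm(conductances, voltages):
    """Ideal crossbar read: each column current is the sum over rows of
    row_voltage * cell_conductance (Ohm's law + Kirchhoff's current law),
    so the array computes a matrix-vector product without moving weights."""
    return voltages @ conductances  # currents at the column outputs (A)

rng = np.random.default_rng(0)

# Assumed device conductance window, e.g. 1 uS to 100 uS (illustrative).
g_min, g_max = 1e-6, 100e-6
scale = (g_max - g_min) / 2.0

# Signed synaptic weights mapped onto two strictly positive conductance
# arrays (a common differential-pair scheme: W ~ G_pos - G_neg).
weights = rng.uniform(-1.0, 1.0, size=(4, 3))
g_pos = g_min + scale * np.clip(weights, 0.0, None)
g_neg = g_min + scale * np.clip(-weights, 0.0, None)

v_in = np.array([0.1, 0.2, -0.1, 0.3])  # input activations as voltages (V)

# Differential read: the shared g_min offset cancels in the subtraction.
i_out = crossbar_mvm(g_pos, v_in) - crossbar_mvm(g_neg, v_in)

print(i_out / scale)   # analog result, rescaled back to weight units
print(v_in @ weights)  # digital reference for comparison
```

This is only a functional model of an ideal array; real devices add conductance drift, nonlinearity, and wire resistance, which the surveyed hardware designs must compensate for.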
Pages: 22
    HARDWARE ACCELERATOR SYSTEMS FOR ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING, 2021, 122 : 135 - 165