FPGA based neural network accelerators

Cited by: 7
Authors
Kim, Joo-Young [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
Keywords
DESIGN; CNN;
DOI
10.1016/bs.adcom.2020.11.002
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning (ML) and artificial intelligence (AI) technologies are revolutionizing many fields of study in computer science as well as a wide range of industry sectors such as information technology, mobile communication, automotive, and manufacturing. As more people use the technology in their everyday lives, the demand for new hardware that enables faster and more energy-efficient AI processing keeps increasing. Over the last few years, traditional hardware makers such as Intel and Nvidia, as well as start-up companies such as Graphcore and Habana Labs, have been competing to offer the best computing platform for complex AI workloads. Although the GPU remains the most popular platform thanks to its generic programming interface, it is poorly suited to mobile/edge applications because of its low hardware utilization and high power consumption. The FPGA, on the other hand, is a promising hardware platform for accelerating deep neural networks (DNNs) thanks to its re-programmability and power efficiency. In this chapter, we review the essential computations in the latest DNN models and their algorithmic optimizations. We then investigate various FPGA-based accelerator architectures and design automation frameworks. Finally, we discuss the device's strengths and weaknesses relative to other types of hardware platforms and conclude with future research directions.
Pages: 135-165 (31 pages)
Related papers
50 records in total
  • [1] [DL] A Survey of FPGA-based Neural Network Inference Accelerators
    Guo, Kaiyuan
    Zeng, Shulin
    Yu, Jincheng
    Wang, Yu
    Yang, Huazhong
    [J]. ACM TRANSACTIONS ON RECONFIGURABLE TECHNOLOGY AND SYSTEMS, 2019, 12 (01)
  • [2] Exploration and Generation of Efficient FPGA-based Deep Neural Network Accelerators
    Ali, Nermine
    Philippe, Jean-Marc
    Tain, Benoit
    Coussy, Philippe
    [J]. 2021 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS 2021), 2021, : 123 - 128
  • [3] Remote Identification of Neural Network FPGA Accelerators by Power Fingerprints
    Meyers, Vincent
    Hefenbrock, Michael
    Gnad, Dennis
    Tahoori, Mehdi
    [J]. 2023 33RD INTERNATIONAL CONFERENCE ON FIELD-PROGRAMMABLE LOGIC AND APPLICATIONS, FPL, 2023, : 259 - 264
  • [4] Using Data Compression for Optimizing FPGA-Based Convolutional Neural Network Accelerators
    Guan, Yijin
    Xu, Ningyi
    Zhang, Chen
    Yuan, Zhihang
    Cong, Jason
    [J]. ADVANCED PARALLEL PROCESSING TECHNOLOGIES, 2017, 10561 : 14 - 26
  • [5] DeepBurning: Automatic Generation of FPGA-based Learning Accelerators for the Neural Network Family
    Wang, Ying
    Xu, Jie
    Han, Yinhe
    Li, Huawei
    Li, Xiaowei
    [J]. 2016 ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2016,
  • [6] Soft Error Mitigation for Deep Convolution Neural Network on FPGA Accelerators
    Li, Wenshuo
    Ge, Guangjun
    Guo, Kaiyuan
    Chen, Xiaoming
    Wei, Qi
    Gao, Zhen
    Wang, Yu
    Yang, Huazhong
    [J]. 2020 2ND IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2020), 2020, : 1 - 5
  • [7] A survey of FPGA-based accelerators for convolutional neural networks
    Mittal, Sparsh
    [J]. NEURAL COMPUTING & APPLICATIONS, 2020, 32 (04): 1109 - 1139
  • [8] WinoNN: Optimizing FPGA-Based Convolutional Neural Network Accelerators Using Sparse Winograd Algorithm
    Wang, Xuan
    Wang, Chao
    Cao, Jing
    Gong, Lei
    Zhou, Xuehai
    [J]. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2020, 39 (11) : 4290 - 4302
  • [9] FPGA-based neural network accelerators for millimeter-wave radio-over-fiber systems
    Lee, Jeonghun
    He, Jiayuan
    Wang, Ke
    [J]. OPTICS EXPRESS, 2020, 28 (09) : 13384 - 13400