Hardware Acceleration Design of Convolutional Neural Networks Based on FPGA

Cited by: 0
Authors
Zhang, Guoning [1 ]
Hu, Jing [1 ]
Li, Laiquan [1 ]
Jiang, Haoyang [1 ]
Affiliations
[1] Heilongjiang Univ, Integrated Circuit Engn, Harbin, Peoples R China
Keywords
Cache Optimization; Fixed-Point Quantization; Multi-Channel Computation; Hardware Acceleration; Object Detection
DOI
10.1109/ICETIS61828.2024.10593714
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Deep learning applications now permeate many facets of daily life. However, conventional implementations of convolutional neural networks (CNNs) on CPU and GPU platforms often demand substantial bandwidth and incur high power consumption. Deploying CNNs on Field-Programmable Gate Arrays (FPGAs), with the CPU providing efficient logic control, offers a promising path to low-power, compact hardware designs. This paper proposes an approach to optimizing YOLOv3-tiny on an FPGA that reduces hardware resource consumption and power usage while improving the computational efficiency of the network. Through hardware optimization strategies, the proposed solution demonstrates improved performance, making it well suited to real-time deep learning inference in resource-constrained environments.
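The keywords list fixed-point quantization among the paper's optimization techniques. As a rough illustration only, the C sketch below shows how floating-point CNN weights can be quantized to signed 8-bit values using a power-of-two scale; the Q1.6 format, scale, and saturation behaviour here are assumptions chosen for demonstration, not the design described in the paper.

/*
 * Illustrative sketch only: 8-bit fixed-point quantization of CNN weights.
 * The Q1.6 format (scale = 2^6) and saturation to the int8 range are
 * assumptions for demonstration, not the authors' actual scheme.
 */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Quantize a float weight to a signed 8-bit value with a power-of-two
 * scale, which maps to cheap shift operations in FPGA logic. */
static int8_t quantize_q1_6(float w)
{
    const float scale = 64.0f;          /* 2^6 fractional bits (assumed) */
    long q = lroundf(w * scale);
    if (q > 127)  q = 127;              /* saturate to int8 range */
    if (q < -128) q = -128;
    return (int8_t)q;
}

int main(void)
{
    float weights[4] = {0.5f, -0.25f, 0.8125f, -1.0f};
    for (int i = 0; i < 4; i++) {
        int8_t q = quantize_q1_6(weights[i]);
        printf("%+.4f -> %4d (dequantized %+.4f)\n",
               weights[i], q, q / 64.0f);
    }
    return 0;
}

A power-of-two scale is a common choice in such designs because dequantization then reduces to an arithmetic shift rather than a hardware multiplier.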
Pages: 11-15
Number of Pages: 5
Related Papers
50 records in total
  • [31] A Comprehensive Review of Hardware Acceleration Techniques and Convolutional Neural Networks for EEG Signals
    Xie, Yu
    Oniga, Stefan
    SENSORS, 2024, 24 (17)
  • [32] Optimizing Loop Operation and Dataflow in FPGA Acceleration of Deep Convolutional Neural Networks
    Ma, Yufei
    Cao, Yu
    Vrudhula, Sarma
    Seo, Jae-sun
    FPGA'17: PROCEEDINGS OF THE 2017 ACM/SIGDA INTERNATIONAL SYMPOSIUM ON FIELD-PROGRAMMABLE GATE ARRAYS, 2017, : 45 - 54
  • [33] Towards Hardware Trojan Resilient Design of Convolutional Neural Networks
    Sun, Peiyao
    Halak, Basel
    Kazmierski, Tomasz
    2022 IEEE 35TH INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE (IEEE SOCC 2022), 2022, : 130 - 135
  • [34] A Method for Accelerating Convolutional Neural Networks Based on FPGA
    Zhao, Mengxing
    Li, Xiang
    Zhu, Shunyi
    Zhou, Li
    2019 4TH INTERNATIONAL CONFERENCE ON COMMUNICATION AND INFORMATION SYSTEMS (ICCIS 2019), 2019, : 241 - 246
  • [35] Design Space Exploration of FPGA Accelerators for Convolutional Neural Networks
    Rahman, Atul
    Oh, Sangyun
    Lee, Jongeun
    Choi, Kiyoung
    PROCEEDINGS OF THE 2017 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE), 2017, : 1147 - 1152
  • [36] Acceleration Techniques for Automated Design of Approximate Convolutional Neural Networks
    Pinos, Michal
    Mrazek, Vojtech
    Vaverka, Filip
    Vasicek, Zdenek
    Sekanina, Lukas
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2023, 13 (01) : 212 - 224
  • [37] Hardware Acceleration of Graph Neural Networks
    Auten, Adam
    Tomei, Matthew
    Kumar, Rakesh
PROCEEDINGS OF THE 2020 57TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2020
  • [38] HFP: Hardware-Aware Filter Pruning for Deep Convolutional Neural Networks Acceleration
    Yu, Fang
    Han, Chuanqi
    Wang, Pengcheng
    Huang, Ruoran
    Huang, Xi
    Cui, Li
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 255 - 262
  • [39] HILP: hardware-in-loop pruning of convolutional neural networks towards inference acceleration
    Dong Li
    Qianqian Ye
    Xiaoyue Guo
    Yunda Sun
    Li Zhang
    Neural Computing and Applications, 2024, 36 : 8825 - 8842
  • [40] HILP: hardware-in-loop pruning of convolutional neural networks towards inference acceleration
    Li, Dong
    Ye, Qianqian
    Guo, Xiaoyue
    Sun, Yunda
    Zhang, Li
    NEURAL COMPUTING & APPLICATIONS, 2024, 36 (15): : 8825 - 8842