Energy-Efficient Architecture for CNNs Inference on Heterogeneous FPGA

Cited by: 14
Authors
Spagnolo, Fanny [1 ]
Perri, Stefania [2 ]
Frustaci, Fabio [1 ]
Corsonello, Pasquale [1 ]
Affiliations
[1] Univ Calabria, Dept Informat Modeling Elect & Syst Engn, I-87036 Arcavacata Di Rende, Italy
[2] Univ Calabria, Dept Mech Energy & Management Engn, I-87036 Arcavacata Di Rende, Italy
Keywords
convolutional neural networks; heterogeneous FPGAs; embedded systems; deep neural networks; accelerator
DOI
10.3390/jlpea10010001
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808; 0809;
Abstract
Because of their large computational and memory requirements, implementing energy-efficient, high-performance Convolutional Neural Networks (CNNs) on embedded systems remains a major challenge for hardware designers. This paper presents the complete design of a heterogeneous embedded system, realized on a Field-Programmable Gate Array System-on-Chip (SoC), suited to accelerating CNN inference in power-constrained environments such as IoT applications. The proposed architecture is validated by running large-scale CNNs on low-cost devices. A prototype realized on a Zynq XC7Z045 device achieves a power efficiency of up to 135 Gops/W and reaches a frame rate of up to 11.8 fps when inferring the VGG-16 model.
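A rough cross-check of the headline figures (a reader's back-of-envelope sketch, not part of the paper): assuming VGG-16 needs roughly 30.9 Gops per 224x224 frame, a commonly cited complexity figure that the abstract does not state, the reported 11.8 fps and 135 Gops/W imply the effective throughput and power budget computed below.

```python
# Back-of-envelope cross-check of the abstract's figures (sketch only).
# Assumption not taken from the paper: one VGG-16 inference on a 224x224
# input costs about 30.9 Gops (~15.5 GMACs counted as two ops each).

VGG16_GOPS_PER_FRAME = 30.9      # assumed per-frame workload (not from the paper)
FRAME_RATE_FPS = 11.8            # peak frame rate reported in the abstract
POWER_EFFICIENCY_GOPS_W = 135.0  # peak power efficiency reported in the abstract

throughput_gops_s = VGG16_GOPS_PER_FRAME * FRAME_RATE_FPS      # effective Gops/s
implied_power_w = throughput_gops_s / POWER_EFFICIENCY_GOPS_W  # implied power draw

print(f"Effective throughput: {throughput_gops_s:.1f} Gops/s")  # ~364.6 Gops/s
print(f"Implied power draw:   {implied_power_w:.1f} W")         # ~2.7 W
```

Under these assumptions the accelerator would draw on the order of a few watts, in line with the power-constrained scenarios the abstract targets.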
Pages: 17
Related Papers
50 records in total
  • [41] Energy-efficient replacement schemes for heterogeneous drive
    Yang, L.
    [J]. Science Press, (50)
  • [42] On Energy-Efficient Edge Caching in Heterogeneous Networks
    Gabry, Frederic
    Bioglio, Valerio
    Land, Ingmar
    [J]. IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2016, 34 (12) : 3288 - 3298
  • [43] An Energy-Efficient FPGA-based Matrix Multiplier
    Tan, Yiyu
    Imamura, Toshiyuki
    [J]. 2017 24TH IEEE INTERNATIONAL CONFERENCE ON ELECTRONICS, CIRCUITS AND SYSTEMS (ICECS), 2017, : 514 - 517
  • [44] Energy-Efficient Computing Acceleration of Unmanned Aerial Vehicles Based on a CPU/FPGA/NPU Heterogeneous System
    Liu, Xing
    Xu, Wenxing
    Wang, Qing
    Zhang, Mengya
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (16): 27126 - 27138
  • [45] Energy-efficient deep learning inference on edge devices
    Daghero, Francesco
    Pagliari, Daniele Jahier
    Poncino, Massimo
    [J]. HARDWARE ACCELERATOR SYSTEMS FOR ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING, 2021, 122 : 247 - 301
  • [46] Energy-efficient Amortized Inference with Cascaded Deep Classifiers
    Guan, Jiaqi
    Liu, Yang
    Liu, Qiang
    Peng, Jian
    [J]. PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 2184 - 2190
  • [47] An Embedded Architecture for Energy-Efficient Stream Computing
    Panda, Amrit
    Chatha, Karam S.
    [J]. IEEE EMBEDDED SYSTEMS LETTERS, 2014, 6 (03) : 57 - 60
  • [48] Energy-efficient Adaptive Wireless NoCs Architecture
    DiTomaso, Dominic
    Kodi, Avinash
    Matolak, David
    Kaya, Savas
    Laha, Soumyasanta
    Rayess, William
    [J]. 2013 SEVENTH IEEE/ACM INTERNATIONAL SYMPOSIUM ON NETWORKS-ON-CHIP (NOCS 2013), 2013,
  • [49] Energy-efficient buffer architecture for flash memory
    Huang, W. T.
    Chen, C. T.
    Chen, C. H.
    Cheng, C. C.
    [J]. MUE: 2008 INTERNATIONAL CONFERENCE ON MULTIMEDIA AND UBIQUITOUS ENGINEERING, PROCEEDINGS, 2008, : 543 - +
  • [50] Designing the Most Energy-Efficient Recommendation Inference Chip
    Lin, Youn-Long
    Kao, Joe
    Chen, Kinny
    [J]. 2023 INTERNATIONAL VLSI SYMPOSIUM ON TECHNOLOGY, SYSTEMS AND APPLICATIONS, VLSI-TSA/VLSI-DAT, 2023,