Fine-grained Hardware Acceleration for Efficient Batteryless Intermittent Inference on the Edge

Cited by: 4
Authors
Caronti, Luca [1 ]
Akhunov, Khakim [1 ]
Nardello, Matteo [1 ]
Yildirim, Kasim Sinan [1 ]
Brunelli, Davide [1 ]
Affiliations
[1] Univ Trento, Via Sommarive 9, I-38123 Trento, TN, Italy
Keywords
Intermittent computing; convolutional neural networks; edge computing; energy harvesting; hardware accelerator; checkpointing;
DOI
10.1145/3608475
CLC Classification
TP3 [Computing technology, computer technology];
Subject Classification
0812 ;
Abstract
Backing up the intermediate results of hardware-accelerated deep inference is crucial to ensure forward progress of execution on batteryless computing platforms. However, hardware accelerators in low-power AI platforms support only the one-shot atomic execution of a complete neural network inference, without any backups. This article introduces a new toolchain for the MAX78000, a recent microcontroller with a hardware-based convolutional neural network (CNN) accelerator. Our toolchain converts any MAX78000-compatible neural network into an intermittently executable form. The toolchain enables finer checkpoint granularity on the MAX78000 CNN accelerator, allowing the output of any intermediate neural network layer to be backed up. Building on this layer-by-layer CNN execution, we propose a new backup technique that performs only the necessary (urgent) checkpoints: the batteryless system switches to an ultra-low-power mode while charging and saves intermediate results only when the harvested input power falls below the consumption of the ultra-low-power mode. By avoiding unnecessary memory transfers, the proposed solution increases inference throughput by 1.9x in simulation and by 1.2x in a real-world setup compared to the coarse-grained baseline execution.
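The urgent-checkpoint policy described in the abstract can be illustrated with a minimal simulation sketch. Everything here is an assumption for illustration: the function and parameter names (`run_inference`, `input_power_mw`, `ulp_power_mw`, `save_checkpoint`) are hypothetical and do not correspond to the paper's toolchain or the MAX78000 API; the sketch only shows the decision rule of checkpointing a layer's output when harvested power cannot sustain the ultra-low-power wait mode.

```python
def run_inference(layers, x, input_power_mw, ulp_power_mw, save_checkpoint):
    """Layer-by-layer execution with 'urgent' checkpoints only.

    After each layer, the intermediate output is backed up only when
    the harvested input power is below the ultra-low-power (ULP) mode
    consumption, i.e. when the system cannot simply wait out a power
    shortfall and a failure is imminent.
    """
    for i, layer in enumerate(layers):
        x = layer(x)
        # Read the current harvested power; checkpoint only if the
        # ULP wait mode cannot be sustained (urgent backup).
        if input_power_mw() < ulp_power_mw:
            save_checkpoint(i, x)
    return x


# Toy usage: three "layers", a scripted power trace (mW), and a
# checkpoint log. Only the reading below the 2.0 mW ULP budget
# triggers a backup.
checkpoints = []
layers = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3]
powers = iter([5.0, 1.0, 5.0])  # one reading after each layer
out = run_inference(layers, 0, lambda: next(powers), 2.0,
                    lambda i, v: checkpoints.append((i, v)))
```

In this trace only the second reading (1.0 mW) falls below the ULP budget, so a single checkpoint of layer 1's output is taken; a coarse-grained baseline would instead back up after every layer or re-run the whole inference on failure.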
Pages: 19
Related Papers
50 results in total
  • [1] Fine-Grained Urban Flow Inference
    Ouyang, Kun
    Liang, Yuxuan
    Liu, Ye
    Tong, Zekun
    Ruan, Sijie
    Zheng, Yu
    Rosenblum, David S.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2022, 34 (06) : 2755 - 2770
  • [2] Exploring Fine-Grained Sparsity in Convolutional Neural Networks for Efficient Inference
    Wang, Longguang
    Guo, Yulan
    Dong, Xiaoyu
    Wang, Yingqian
    Ying, Xinyi
    Lin, Zaiping
    An, Wei
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (04) : 4474 - 4493
  • [3] OfpCNN: On-Demand Fine-Grained Partitioning for CNN Inference Acceleration in Heterogeneous Devices
    Yang, Lei
    Zheng, Can
    Shen, Xiaoyuan
    Xie, Guoqi
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34 (12) : 3090 - 3103
  • [4] DTrace: fine-grained and efficient data integrity checking with hardware instruction tracing
    Wang, Xiayang
    Huang, Fuqian
    Chen, Haibo
    CYBERSECURITY, 2019, 2 (01)
  • [6] Fine-Grained Entity Typing with Hierarchical Inference
    Ren, Quan
    PROCEEDINGS OF 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2020), 2020, : 2552 - 2558
  • [7] Distributed DNN Inference With Fine-Grained Model Partitioning in Mobile Edge Computing Networks
    Li, Hui
    Li, Xiuhua
    Fan, Qilin
    He, Qiang
    Wang, Xiaofei
    Leung, Victor C. M.
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (10) : 9060 - 9074
  • [8] FADES: Fine-Grained Edge Offloading with Unikernels
    Cozzolino, Vittorio
    Ding, Aaron Yi
    Ott, Joerg
    PROCEEDINGS OF THE 2017 WORKSHOP ON HOT TOPICS IN CONTAINER NETWORKING AND NETWORKED SYSTEMS (HOTCONNET 17), 2017, : 36 - 41
  • [9] Machine Learning for Fine-Grained Hardware Prefetcher Control
    Hiebel, Jason
    Brown, Laura E.
    Wang, Zhenlin
    PROCEEDINGS OF THE 48TH INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING (ICPP 2019), 2019,
  • [10] Legba: Fast hardware support for fine-grained protection
    Wiggins, A
    Winwood, S
    Tuch, H
    Heiser, G
    ADVANCES IN COMPUTER SYSTEMS ARCHITECTURE, 2003, 2823 : 320 - 336