Throughput Maximization of Delay-Aware DNN Inference in Edge Computing by Exploring DNN Model Partitioning and Inference Parallelism

Cited by: 44
Authors
Li, Jing [1 ]
Liang, Weifa [2 ]
Li, Yuchen [1 ]
Xu, Zichuan [3 ]
Jia, Xiaohua [2 ]
Guo, Song [4 ]
Affiliations
[1] Australian Natl Univ, Sch Comp, Canberra, ACT 0200, Australia
[2] City Univ Hong Kong, Dept Comp Sci, 83 Tat Chee Ave, Hong Kong, Peoples R China
[3] Dalian Univ Technol, Sch Software, Dalian 116024, Liaoning, Peoples R China
[4] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Inference algorithms; Delays; Partitioning algorithms; Computational modeling; Task analysis; Approximation algorithms; Parallel processing; Mobile edge computing (MEC); DNN model inference provisioning; throughput maximization; Intelligent IoT devices; approximation and online algorithms; delay-aware DNN inference; DNN partitioning; inference parallelism; computing and bandwidth resource allocation and optimization; algorithm design and analysis; CLOUD;
DOI
10.1109/TMC.2021.3125949
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Mobile Edge Computing (MEC) has emerged as a promising paradigm for coping with the explosive growth of mobile applications, by offloading compute-intensive tasks to MEC networks for processing. The surge of deep learning brings new vigor and vitality to the prospect of the intelligent Internet of Things (IoT), and edge intelligence has arisen to provision real-time deep neural network (DNN) inference services for users. To accelerate the processing of a user's DNN inference request in an MEC network, the DNN inference model usually can be partitioned into two connected parts: one part is processed on the local IoT device of the request, and the other part is processed in a cloudlet (edge server) in the MEC network. The DNN inference can be further accelerated by allocating multiple threads of the cloudlet to which the request is assigned. In this paper, we study a novel delay-aware DNN inference throughput maximization problem that aims to maximize the number of delay-aware DNN service requests admitted, by accelerating each DNN inference through jointly exploring DNN partitioning and multi-thread execution parallelism. Specifically, we consider the problem under both offline and online request arrival settings: in the offline setting, a set of DNN inference requests is given in advance, while in the online setting, DNN inference requests arrive one by one without knowledge of future arrivals. We first show that the defined problems are NP-hard. We then devise a novel constant approximation algorithm for the problem under the offline setting. We also propose an online algorithm with a provable competitive ratio for the problem under the online setting. We finally evaluate the performance of the proposed algorithms through experimental simulations. Experimental results demonstrate that the proposed algorithms are promising.
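To make the joint decision concrete, the following is a minimal Python sketch of the per-request feasibility check implied by the abstract: choose a DNN partition point and a cloudlet thread count so that the device compute time, the uplink transfer of the intermediate data, and the parallelized cloudlet compute time together meet the request's delay bound. It is an illustrative sketch only, not the paper's approximation or online algorithms; the layer profile, the linear multi-thread speed-up, and the fewest-threads tie-break are all assumptions introduced here.

    from dataclasses import dataclass

    @dataclass
    class Layer:
        local_ms: float   # execution time of this layer on the IoT device
        cloud_ms: float   # single-thread execution time of this layer on the cloudlet
        out_mb: float     # size of this layer's output (intermediate data), in MB

    def best_plan(layers, input_mb, deadline_ms, uplink_mbps, max_threads, free_threads):
        """Return (partition_index, threads) meeting the delay bound, or None.
        layers[:k] run on the device, layers[k:] on the cloudlet with t threads;
        a linear speed-up in t is assumed purely for illustration."""
        best = None
        for k in range(len(layers) + 1):                      # candidate partition points
            device_ms = sum(l.local_ms for l in layers[:k])
            if k == 0:                                        # everything offloaded: ship the raw input
                data_mb = input_mb
            elif k == len(layers):                            # everything local: nothing to ship
                data_mb = 0.0
            else:                                             # ship layer k-1's output
                data_mb = layers[k - 1].out_mb
            upload_ms = data_mb * 8.0 / uplink_mbps * 1000.0  # MB -> Mbit, s -> ms
            remote_serial_ms = sum(l.cloud_ms for l in layers[k:])
            # A fully local plan needs no cloudlet threads at all.
            thread_choices = [0] if k == len(layers) else range(1, min(max_threads, free_threads) + 1)
            for t in thread_choices:
                remote_ms = remote_serial_ms / t if t else 0.0
                if device_ms + upload_ms + remote_ms <= deadline_ms and (best is None or t < best[1]):
                    best = (k, t)                             # prefer the plan using the fewest threads
        return best

    # Hypothetical per-layer profile and request parameters.
    layers = [Layer(15, 4, 0.25), Layer(40, 10, 0.05), Layer(20, 5, 0.01)]
    print(best_plan(layers, input_mb=1.0, deadline_ms=45,
                    uplink_mbps=100, max_threads=4, free_threads=3))
    # -> (1, 2): run the first layer on the device, offload the rest to the cloudlet
    #    with two threads; estimated delay 15 + 20 + 7.5 = 42.5 ms <= 45 ms.

An admission-control loop would run such a check for each request against the cloudlet's currently free threads, admitting the request only when a feasible partition/thread plan exists; the paper's contribution lies in doing this admission so that the total number of admitted delay-aware requests is (approximately or competitively) maximized.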
Pages: 3017 - 3030
Number of pages: 14
Related Papers
50 records in total
  • [31] Resource-Efficient DNN Inference With Early Exiting in Serverless Edge Computing
    Guo, Xiaolin
    Dong, Fang
    Shen, Dian
    Huang, Zhaowu
    Zhang, Jinghui
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (05) : 3650 - 3666
  • [32] DNN Real-Time Collaborative Inference Acceleration with Mobile Edge Computing
    Yang, Run
    Li, Yan
    He, Hui
    Zhang, Weizhe
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [33] Performance Evaluation of State-of-the-Art Edge Computing Devices for DNN Inference
    Rancano, Xalo
    Molanes, Roberto Fernandez
    Gonzalez-Val, Carlos
    Rodriguez-Andina, Juan J.
    Farina, Jose
    IECON 2020: THE 46TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2020, : 2286 - 2291
  • [34] CoEdge: Cooperative DNN Inference With Adaptive Workload Partitioning Over Heterogeneous Edge Devices
    Zeng, Liekang
    Chen, Xu
    Zhou, Zhi
    Yang, Lei
    Zhang, Junshan
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2021, 29 (02) : 595 - 608
  • [35] Mistify: Automating DNN Model Porting for On-Device Inference at the Edge
    Guo, Peizhen
    Hu, Bo
    Hu, Wenjun
    PROCEEDINGS OF THE 18TH USENIX SYMPOSIUM ON NETWORKED SYSTEM DESIGN AND IMPLEMENTATION, 2021, : 705 - 720
  • [36] End-to-End Delay Minimization based on Joint Optimization of DNN Partitioning and Resource Allocation for Cooperative Edge Inference
    Ye, Xinrui
    Sun, Yanzan
    Wen, Dingzhu
    Pan, Guanjin
    Zhang, Shunqing
    2023 IEEE 98TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-FALL, 2023,
  • [37] Adaptive Workload Distribution for Accuracy-aware DNN Inference on Collaborative Edge Platforms
    Taufique, Zain
    Miele, Antonio
    Liljeberg, Pasi
    Kanduri, Anil
    29TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC 2024, 2024, : 109 - 114
  • [38] A DNN inference acceleration algorithm combining model partition and task allocation in heterogeneous edge computing system
    Lei Shi
    Zhigang Xu
    Yabo Sun
    Yi Shi
    Yuqi Fan
    Xu Ding
    Peer-to-Peer Networking and Applications, 2021, 14 : 4031 - 4045
  • [39] A DNN inference acceleration algorithm combining model partition and task allocation in heterogeneous edge computing system
    Shi, Lei
    Xu, Zhigang
    Sun, Yabo
    Shi, Yi
    Fan, Yuqi
    Ding, Xu
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2021, 14 (06) : 4031 - 4045
  • [40] Optimal DNN Inference Delay Minimization for Chain-Structured Roadside Edge Networks
    Wan, Xili
    Ji, Tingxiang
    Guan, Xinjie
    Zhu, Aichun
    Ye, Feng
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (12) : 16731 - 16736