Throughput Maximization of Delay-Aware DNN Inference in Edge Computing by Exploring DNN Model Partitioning and Inference Parallelism

Cited by: 44
Authors
Li, Jing [1 ]
Liang, Weifa [2 ]
Li, Yuchen [1 ]
Xu, Zichuan [3 ]
Jia, Xiaohua [2 ]
Guo, Song [4 ]
Affiliations
[1] Australian Natl Univ, Sch Comp, Canberra, ACT 0200, Australia
[2] City Univ Hong Kong, Dept Comp Sci, 83 Tat Chee Ave, Hong Kong, Peoples R China
[3] Dalian Univ Technol, Sch Software, Dalian 116024, Liaoning, Peoples R China
[4] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Inference algorithms; Delays; Partitioning algorithms; Computational modeling; Task analysis; Approximation algorithms; Parallel processing; Mobile edge computing (MEC); DNN model inference provisioning; throughput maximization; Intelligent IoT devices; approximation and online algorithms; delay-aware DNN inference; DNN partitioning; inference parallelism; computing and bandwidth resource allocation and optimization; algorithm design and analysis; CLOUD;
DOI
10.1109/TMC.2021.3125949
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
Mobile Edge Computing (MEC) has emerged as a promising paradigm for handling the explosive growth of mobile applications by offloading compute-intensive tasks to MEC networks for processing. The surge of deep learning brings new vigor and vitality to the prospect of an intelligent Internet of Things (IoT), and edge intelligence has arisen to provision real-time deep neural network (DNN) inference services to users. To accelerate the DNN inference of a user request in an MEC network, the DNN inference model can usually be partitioned into two connected parts: one part is processed on the local IoT device of the request, and the other part is processed on a cloudlet (edge server) in the MEC network. The DNN inference can be further accelerated by allocating multiple threads of the cloudlet to which the request is assigned. In this paper, we study a novel delay-aware DNN inference throughput maximization problem that aims to maximize the number of delay-aware DNN service requests admitted, by accelerating each DNN inference through jointly exploring DNN partitioning and multi-thread execution parallelism. Specifically, we consider the problem under both offline and online request arrival settings: a set of DNN inference requests given in advance, and a sequence of DNN inference requests arriving one by one without knowledge of future arrivals, respectively. We first show that the defined problems are NP-hard. We then devise a novel constant approximation algorithm for the problem under the offline setting. We also propose an online algorithm with a provable competitive ratio for the problem under the online setting. We finally evaluate the performance of the proposed algorithms through experimental simulations. Experimental results demonstrate that the proposed algorithms are promising.
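To illustrate the joint decision the abstract describes, the following is a minimal Python sketch, not the paper's approximation or online algorithm: assuming per-layer latency profiles on the IoT device and on the cloudlet, intermediate tensor sizes, an uplink bandwidth, and a sub-linear speedup from running the cloudlet part with k threads, it enumerates partition points and thread counts to find the fastest plan that meets a request's deadline, and a simple greedy loop then admits requests against a cloudlet thread budget. All names here (Request, best_partition, greedy_admit, the speedup model) are illustrative assumptions, not taken from the paper.

from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Request:
    deadline_ms: float                  # end-to-end delay requirement of the inference request
    input_kb: float                     # size of the raw input sample (kilobytes)
    layer_device_ms: List[float]        # per-layer latency when run on the IoT device
    layer_cloudlet_ms: List[float]      # per-layer latency on the cloudlet with one thread
    layer_out_kb: List[float]           # size of each layer's output tensor (kilobytes)
    uplink_kb_per_s: float              # device-to-cloudlet bandwidth (kilobytes per second)


def best_partition(req: Request, max_threads: int,
                   speedup=lambda k: k ** 0.7) -> Optional[Tuple[int, int, float]]:
    """Return (partition_point, threads, delay_ms) for the cheapest feasible plan,
    or None if no plan meets the deadline. Layers [0, p) run locally and layers
    [p, n) run on the cloudlet; speedup(k) is an assumed sub-linear gain from
    executing the cloudlet part with k threads."""
    n = len(req.layer_device_ms)
    best = None
    for p in range(n + 1):                       # p = 0: fully offloaded, p = n: fully local
        local_ms = sum(req.layer_device_ms[:p])
        data_kb = req.input_kb if p == 0 else req.layer_out_kb[p - 1]
        transfer_ms = 0.0 if p == n else 1000.0 * data_kb / req.uplink_kb_per_s
        remote_single_ms = sum(req.layer_cloudlet_ms[p:])
        for k in range(1, max_threads + 1):
            delay = local_ms + transfer_ms + remote_single_ms / speedup(k)
            if delay > req.deadline_ms:
                continue
            # prefer plans that hold fewer cloudlet threads, breaking ties by delay
            if best is None or (k, delay) < (best[1], best[2]):
                best = (p, k, delay)
    return best


def greedy_admit(requests: List[Request], total_threads: int = 8):
    """A simple greedy admission loop (not the paper's algorithm): process requests
    in deadline order and admit each one whose best plan fits the remaining threads."""
    admitted, free = [], total_threads
    for r in sorted(requests, key=lambda r: r.deadline_ms):
        if free == 0:
            break
        plan = best_partition(r, max_threads=free)
        if plan is not None:
            admitted.append((r, plan))
            free -= plan[1]
    return admitted

Preferring plans that hold fewer threads before breaking ties by delay mirrors the throughput objective: a request that can meet its deadline with less cloudlet capacity leaves more room for admitting later requests.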
Pages: 3017-3030
Page count: 14