Throughput Maximization of Delay-Aware DNN Inference in Edge Computing by Exploring DNN Model Partitioning and Inference Parallelism

Cited by: 44
Authors
Li, Jing [1 ]
Liang, Weifa [2 ]
Li, Yuchen [1 ]
Xu, Zichuan [3 ]
Jia, Xiaohua [2 ]
Guo, Song [4 ]
Affiliations
[1] Australian Natl Univ, Sch Comp, Canberra, ACT 0200, Australia
[2] City Univ Hong Kong, Dept Comp Sci, 83 Tat Chee Ave, Hong Kong, Peoples R China
[3] Dalian Univ Technol, Sch Software, Dalian 116024, Liaoning, Peoples R China
[4] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Inference algorithms; Delays; Partitioning algorithms; Computational modeling; Task analysis; Approximation algorithms; Parallel processing; Mobile edge computing (MEC); DNN model inference provisioning; throughput maximization; Intelligent IoT devices; approximation and online algorithms; delay-aware DNN inference; DNN partitioning; inference parallelism; computing and bandwidth resource allocation and optimization; algorithm design and analysis; CLOUD;
DOI
10.1109/TMC.2021.3125949
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Mobile Edge Computing (MEC) has emerged as a promising paradigm for handling the explosive growth of mobile applications by offloading compute-intensive tasks to MEC networks for processing. The surge of deep learning brings new vigor and vitality to the prospect of the intelligent Internet of Things (IoT), and edge intelligence has arisen to provision real-time deep neural network (DNN) inference services for users. To accelerate the processing of a user's DNN inference request in an MEC network, the DNN inference model can usually be partitioned into two connected parts: one part is processed on the request's local IoT device, and the other part is processed in a cloudlet (edge server) in the MEC network. The DNN inference can be further accelerated by allocating multiple threads of the cloudlet to which the request is assigned. In this paper, we study a novel delay-aware DNN inference throughput maximization problem with the aim of maximizing the number of delay-aware DNN service requests admitted, by accelerating each DNN inference through jointly exploring DNN partitioning and multi-thread execution parallelism. Specifically, we consider the problem under both offline and online request arrival settings: in the offline setting, a set of DNN inference requests is given in advance, while in the online setting, a sequence of DNN inference requests arrives one by one without knowledge of future arrivals. We first show that the defined problems are NP-hard. We then devise a novel constant approximation algorithm for the problem under the offline setting. We also propose an online algorithm with a provable competitive ratio for the problem under the online setting. We finally evaluate the performance of the proposed algorithms through experimental simulations. Experimental results demonstrate that the proposed algorithms are promising.
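The abstract describes accelerating each request by choosing a DNN partition point (device side vs. cloudlet side) and a number of cloudlet threads, subject to the request's delay bound. Below is a minimal, illustrative sketch of that idea, assuming per-layer latency profiles, a simple uplink model, and an idealized linear multi-thread speedup; it is a brute-force feasibility check, not the paper's constant-approximation or online algorithm, and all names and numbers are hypothetical.

```python
# Illustrative sketch only (not the paper's algorithm): enumerate candidate
# DNN partition points and cloudlet thread counts for one request, and admit
# the request only if some plan meets its deadline. Profiles, the linear
# speedup model, and all values below are assumptions for illustration.

def best_partition(local_ms, edge_ms, sent_kb, uplink_kbps, max_threads, deadline_ms):
    """local_ms[i] : time for layer i on the IoT device (ms)
       edge_ms[i]  : time for layer i on a single cloudlet thread (ms)
       sent_kb[k]  : data shipped to the cloudlet if the cut is after layer k
                     (sent_kb[0] = raw input, sent_kb[n] = 0, i.e. fully local)
       Returns (cut, threads, delay_ms) of the fastest feasible plan, else None."""
    n = len(local_ms)
    best = None
    for cut in range(n + 1):                       # layers 0..cut-1 run on the device
        for threads in range(1, max_threads + 1):  # threads allocated on the cloudlet
            device = sum(local_ms[:cut])
            upload = sent_kb[cut] / uplink_kbps * 1000.0
            edge = sum(edge_ms[cut:]) / threads    # idealized linear speedup
            delay = device + upload + edge
            if delay <= deadline_ms and (best is None or delay < best[2]):
                best = (cut, threads, delay)
    return best

# Example: a 4-layer model, a 100 ms deadline, up to 4 cloudlet threads.
plan = best_partition(local_ms=[20, 35, 40, 15],
                      edge_ms=[4, 7, 8, 3],
                      sent_kb=[512, 256, 64, 32, 0],
                      uplink_kbps=10000,
                      max_threads=4,
                      deadline_ms=100)
print(plan)  # admit the request iff plan is not None
```

A throughput-maximizing scheduler, as studied in the paper, would run such a per-request feasibility check and then decide which requests to admit subject to the cloudlet's computing and bandwidth capacities.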
Pages: 3017-3030
Number of pages: 14
Related Papers
50 items in total
  • [21] Modeling of Deep Neural Network (DNN) Placement and Inference in Edge Computing
    Bensalem, Mounir
    Dizdarevic, Jasenka
    Jukan, Admela
    2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS (ICC WORKSHOPS), 2020,
  • [22] Privacy-Aware Edge Computing Based on Adaptive DNN Partitioning
    Shi, Chengshuai
    Chen, Lixing
    Shen, Cong
    Song, Linqi
    Xu, Jie
    2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [23] DNN Surgery: Accelerating DNN Inference on the Edge through Layer Partitioning (vol 11, pg 3111, 2023)
    Liang, Huanghuang
    Sang, Qianlong
    Hu, Chuang
    Cheng, Dazhao
    Zhou, Xiaobo
    Wang, Dan
    Bao, Wei
    Wang, Yu
    IEEE TRANSACTIONS ON CLOUD COMPUTING, 2024, 12 (03) : 966 - 966
  • [24] ADDA: Adaptive Distributed DNN Inference Acceleration in Edge Computing Environment
    Wang, Huitian
    Cai, Guangxing
    Huang, Zhaowu
    Dong, Fang
    2019 IEEE 25TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2019, : 438 - 445
  • [25] Coarse-to-Fine: A hierarchical DNN inference framework for edge computing
    Zhang, Zao
    Zhang, Yuning
    Bao, Wei
    Li, Changyang
    Yuan, Dong
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 157 : 180 - 192
  • [26] Cutting-Edge Inference: Dynamic DNN Model Partitioning and Resource Scaling for Mobile AI
    Lim, Jeong-A
    Lee, Joohyun
    Kwak, Jeongho
    Kim, Yeongjin
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (06) : 3300 - 3316
  • [27] QoS-Aware Irregular Collaborative Inference for Improving Throughput of DNN Services
    Fu, Kaihua
    Shi, Jiuchen
    Chen, Quan
    Zheng, Ningxin
    Zhang, Wei
    Zeng, Deze
    Guo, Minyi
    SC22: INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS, 2022,
  • [28] Memory-aware and context-aware multi-DNN inference on the edge
    Cox, Bart
    Birke, Robert
    Chen, Lydia Y.
    PERVASIVE AND MOBILE COMPUTING, 2022, 83
  • [29] Edge intelligence in motion: Mobility-aware dynamic DNN inference service migration with downtime in mobile edge computing
    Wang, Pu
    Ouyang, Tao
    Liao, Guocheng
    Gong, Jie
    Yu, Shuai
    Chen, Xu
    Journal of Systems Architecture, 2022, 130