Throughput Maximization of Delay-Aware DNN Inference in Edge Computing by Exploring DNN Model Partitioning and Inference Parallelism

Cited by: 44
Authors
Li, Jing [1 ]
Liang, Weifa [2 ]
Li, Yuchen [1 ]
Xu, Zichuan [3 ]
Jia, Xiaohua [2 ]
Guo, Song [4 ]
Affiliations
[1] Australian Natl Univ, Sch Comp, Canberra, ACT 0200, Australia
[2] City Univ Hong Kong, Dept Comp Sci, 83 Tat Chee Ave, Hong Kong, Peoples R China
[3] Dalian Univ Technol, Sch Software, Dalian 116024, Liaoning, Peoples R China
[4] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Inference algorithms; Delays; Partitioning algorithms; Computational modeling; Task analysis; Approximation algorithms; Parallel processing; Mobile edge computing (MEC); DNN model inference provisioning; throughput maximization; Intelligent IoT devices; approximation and online algorithms; delay-aware DNN inference; DNN partitioning; inference parallelism; computing and bandwidth resource allocation and optimization; algorithm design and analysis; CLOUD;
DOI
10.1109/TMC.2021.3125949
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Mobile Edge Computing (MEC) has emerged as a promising paradigm for catering to the explosive growth of mobile applications by offloading compute-intensive tasks to MEC networks for processing. The surge of deep learning brings new vigor and vitality to shaping the prospect of the intelligent Internet of Things (IoT), and edge intelligence arises to provision real-time deep neural network (DNN) inference services for users. To accelerate the processing of a user request's DNN inference in an MEC network, the DNN inference model can usually be partitioned into two connected parts: one part is processed on the local IoT device that issues the request, and the other part is processed in a cloudlet (edge server) in the MEC network. The DNN inference can be further accelerated by allocating multiple threads of the cloudlet to which the request is assigned. In this paper, we study a novel delay-aware DNN inference throughput maximization problem with the aim of maximizing the number of delay-aware DNN service requests admitted, by accelerating each DNN inference through jointly exploring DNN partitioning and multi-thread execution parallelism. Specifically, we consider the problem under both offline and online request arrival settings: a set of DNN inference requests given in advance, and a sequence of DNN inference requests arriving one by one without knowledge of future arrivals, respectively. We first show that the defined problems are NP-hard. We then devise a novel constant approximation algorithm for the problem under the offline setting. We also propose an online algorithm with a provable competitive ratio for the problem under the online setting. We finally evaluate the performance of the proposed algorithms through experimental simulations. Experimental results demonstrate that the proposed algorithms are promising.
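To make the partition-plus-parallelism idea in the abstract concrete, the following minimal Python sketch (not taken from the paper; all layer timings, the linear-speedup assumption for multi-threading, and the function names are illustrative assumptions) enumerates candidate cut points of a DNN and thread counts on the cloudlet, and admits a request only if some configuration meets its delay bound.

```python
# Illustrative sketch, not the paper's algorithm. Assumes per-layer local and
# cloudlet processing times, an upload cost for the intermediate data at each
# cut point, and a linear speedup from cloudlet threads.

def inference_delay(local_ms, cloudlet_ms, upload_ms, cut, threads):
    """End-to-end delay when layers [0, cut) run on the IoT device and
    layers [cut, n) run on the cloudlet with `threads` parallel threads."""
    local = sum(local_ms[:cut])
    transfer = upload_ms[cut]                 # ship the cut layer's output
    remote = sum(cloudlet_ms[cut:]) / threads # assumed linear speedup
    return local + transfer + remote

def best_partition(local_ms, cloudlet_ms, upload_ms, delay_bound_ms, max_threads):
    """Return the first feasible (cut, threads, delay), trying smaller thread
    counts first so more requests can share the cloudlet; None if infeasible."""
    n = len(local_ms)
    for threads in range(1, max_threads + 1):
        for cut in range(n + 1):              # cut = 0 means fully offloaded
            d = inference_delay(local_ms, cloudlet_ms, upload_ms, cut, threads)
            if d <= delay_bound_ms:
                return cut, threads, d
    return None                               # request cannot be admitted

# Hypothetical 5-layer DNN: per-layer delays (ms) and upload cost at each cut.
local_ms    = [12.0, 18.0, 25.0, 30.0, 8.0]
cloudlet_ms = [3.0, 4.5, 6.0, 7.5, 2.0]
upload_ms   = [20.0, 9.0, 6.0, 4.0, 3.0, 1.0]  # n + 1 possible cut points

print(best_partition(local_ms, cloudlet_ms, upload_ms,
                     delay_bound_ms=40.0, max_threads=4))
```

In this toy instance no single-threaded configuration meets the 40 ms bound, but fully offloading the request with two cloudlet threads does, which mirrors the abstract's point that partitioning and inference parallelism must be chosen jointly.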
Pages: 3017 - 3030
Page count: 14
Related Papers
50 records
  • [41] Yan, Guozhi; Liu, Chunhui; Liu, Kai. ASPM: Reliability-Oriented DNN Inference Partition and Offloading in Vehicular Edge Computing. 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), 2023: 3298-3303
  • [42] Liu, Guozhi; Dai, Fei; Xu, Xiaolong; Fu, Xiaodong; Dou, Wanchun; Kumar, Neeraj; Bilal, Muhammad. An adaptive DNN inference acceleration framework with end-edge-cloud collaborative computing. Future Generation Computer Systems, 2023, 140: 422-435
  • [43] Yang, Zheming; Ji, Wen; Guo, Qi; Wang, Zhi. JAVP: Joint-Aware Video Processing with Edge-Cloud Collaboration for DNN Inference. Proceedings of the 31st ACM International Conference on Multimedia (MM 2023), 2023: 9152-9160
  • [44] Ji, Cheng; Zhu, Zongwei; Wang, Xianmin; Zhai, Wenjie; Zong, Xuemei; Chen, Anqi; Zhou, Mingliang. Task-aware swapping for efficient DNN inference on DRAM-constrained edge systems. International Journal of Intelligent Systems, 2022, 37(10): 8155-8169
  • [46] Xu, Zichuan; Zhao, Liqian; Liang, Weifa; Rana, Omer F.; Zhou, Pan; Xia, Qiufen; Xu, Wenzheng; Wu, Guowei. Energy-Aware Inference Offloading for DNN-Driven Applications in Mobile Edge Clouds. IEEE Transactions on Parallel and Distributed Systems, 2021, 32(4): 799-814
  • [47] Suvizi, Ali; Subramaniam, Suresh; Lan, Tian; Venkataramani, Guru. Exploring In-Memory Accelerators and FPGAs for Latency-Sensitive DNN Inference on Edge Servers. 2024 IEEE Cloud Summit, 2024: 1-6
  • [48] Lin, Jieyu; Li, Minghao; Zhang, Sai Qian; Leon-Garcia, Alberto. Murmuration: On-the-fly DNN Adaptation for SLO-Aware Distributed Inference in Dynamic Edge Environments. 53rd International Conference on Parallel Processing (ICPP 2024), 2024: 792-801
  • [49] Kim, TaeYoung; Kim, Chang Kyung; Lee, Seung-seob; Lee, Sukyoung. Incentive-Aware Partitioning and Offloading Scheme for Inference Services in Edge Computing. IEEE Transactions on Services Computing, 2024, 17(4): 1580-1592
  • [50] Jararweh, Yaser; Al-Ayyoub, Mahmoud; Al-Quraan, Muneera; Tawalbeh, Lo’ai A.; Benkhelifa, Elhadj. Delay-aware power optimization model for mobile edge computing systems. Personal and Ubiquitous Computing, 2017, 21: 1067-1077