Energy-efficient Incremental Offloading of Neural Network Computations in Mobile Edge Computing

Cited by: 2
|
Authors
Guo, Guangfeng [1 ,2 ]
Zhang, Junxing [1 ]
Affiliations
[1] Inner Mongolia Univ, Coll Comp Sci, Hohhot, Peoples R China
[2] Inner Mongolia Univ Sci & Technol, Baotou Teachers Coll, Baotou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Mobile Edge Computing; Deep Neural Network; Computation Offloading; Energy Efficient;
DOI
10.1109/GLOBECOM42002.2020.9322504
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Neural Networks (DNNs) have shown remarkable success in Computer Vision and Augmented Reality. However, battery-powered devices still cannot afford to run state-of-the-art DNNs. Mobile Edge Computing (MEC) is a promising approach to running DNNs on energy-constrained mobile devices. It uploads the DNN model partitions of the devices to the nearest edge servers on demand, and then offloads DNN computations to the servers to save the energy of the devices. Nevertheless, the existing all-at-once computation offloading faces two great challenges. The first is how to find the most energy-efficient model partition scheme under different wireless network bandwidths in MEC. The second is how to reduce the time and energy cost of the devices waiting for the servers, since uploading all DNN layers of the optimal partition often takes time. To meet these challenges, we propose the following solution. First, we build regression-based energy consumption prediction models by profiling the energy consumption of mobile devices under varied wireless network bandwidths. Then, we present an algorithm that finds the most energy-efficient DNN partition scheme based on the established prediction models and performs incremental computation offloading upon the completion of uploading each DNN partition. The experimental results show that our solution improves energy efficiency compared to the current all-at-once approach. Under 100 Mbps bandwidth, when the model uploading takes 1/3 of the total uploading time, the proposed solution reduces energy consumption by around 40%.
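The partition-selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the per-layer energy profile, the transmission power coefficient, and the linear upload-energy model are all assumed values standing in for the paper's regression-based prediction models.

```python
# Hedged sketch: pick the DNN partition point that minimizes device energy.
# Layers [0, k) run on-device; the output of layer k-1 (or the raw input,
# when k == 0) is uploaded to the edge server. All numbers are illustrative.

def transmission_energy(data_mb, bandwidth_mbps, tx_power_w=1.2):
    """Assumed linear model: upload time multiplied by radio power."""
    upload_time_s = data_mb * 8 / bandwidth_mbps
    return upload_time_s * tx_power_w

def best_partition(layers, bandwidth_mbps):
    """Return (k, energy): the partition index and its predicted device
    energy. k == 0 means full offload; k == len(layers) means fully local."""
    best_k, best_e = 0, float("inf")
    for k in range(len(layers) + 1):
        local_e = sum(layer["energy_j"] for layer in layers[:k])
        if k == 0:
            data_mb = layers[0]["input_mb"]          # upload the raw input
        else:
            data_mb = layers[k - 1]["output_mb"]     # upload intermediate data
        total = local_e + transmission_energy(data_mb, bandwidth_mbps)
        if total < best_e:
            best_k, best_e = k, total
    return best_k, best_e

# Illustrative three-layer profile (energy in joules, tensor sizes in MB).
layers = [
    {"input_mb": 0.6, "output_mb": 3.2, "energy_j": 0.8},  # conv
    {"input_mb": 3.2, "output_mb": 0.8, "energy_j": 0.5},  # pool
    {"input_mb": 0.8, "output_mb": 0.1, "energy_j": 1.5},  # fc
]
k, e = best_partition(layers, bandwidth_mbps=100)
```

With this toy profile, a fast link (100 Mbps) favors offloading early, while a slow link pushes the split deeper into the network because uploading large intermediate tensors becomes more expensive than computing locally. The incremental part of the paper's approach, uploading each partition's layers and offloading as soon as each upload completes, is not modeled here.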
Pages: 6