An Energy-Efficient Fine-Grained Deep Neural Network Partitioning Scheme for Wireless Collaborative Fog Computing

Cited by: 15
Authors
Kilcioglu, Emre [1 ]
Mirghasemi, Hamed [1 ]
Stupia, Ivan [1 ]
Vandendorpe, Luc [1 ]
Affiliations
[1] Catholic Univ Louvain, ICTEAM ELEN, B-1348 Ottignies, Belgium
Source
IEEE ACCESS | 2021 / Vol. 9
Keywords
Servers; Collaboration; Computational modeling; Edge computing; Wireless communication; Data models; Convex optimization; deep convolutional neural network; energy efficiency; fog computing; DNN partitioning; wireless collaborative computing; LEARNING INFERENCE; EDGE; CLOUD; INTERNET; OPTIMIZATION; INTELLIGENCE; TUTORIAL;
DOI
10.1109/ACCESS.2021.3084689
CLC Classification Number
TP [Automation and Computer Technology];
Subject Classification Code
0812 ;
Abstract
Fog computing is a promising way for heterogeneous resource-constrained mobile devices to collaboratively run deep learning-driven applications at the edge of the network, rather than offloading these applications to powerful cloud servers, owing to reduced latency, a decentralized structure, and privacy concerns. Compared to the mobile cloud computing concept, where computation-intensive deep learning operations are offloaded to powerful cloud servers, exploiting the computing capabilities of resource-constrained devices in a collaborative fog computing scenario with deep neural network (DNN) partitioning can improve delay performance and lessen the need for powerful servers to execute such applications. In this paper, we propose an energy-efficient fine-grained DNN partitioning scheme for wireless collaborative fog computing systems. The proposed scheme combines layer-based partitioning, where the DNN model is divided layer by layer, with horizontal partitioning, where the input data of each layer operation is split among multiple devices to enable parallel computing. A convex optimization problem is formulated to minimize the energy consumption of the collaborative part of the system by jointly optimizing the communication and computation parameters as well as the workload of each participating device, and it is solved using primal-dual decomposition and Lagrange duality theory. Simulation results show that the proposed optimized scheme achieves notably lower energy consumption than a non-optimized baseline in which the workload is distributed equally among all participating devices while the communication and computation parameters are still optimized, making that baseline a quite challenging bound for comparison.
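The workload-allocation step described in the abstract can be illustrated with a toy convex model. The quadratic per-device energy cost and its closed-form Lagrangian solution below are illustrative assumptions for the sketch, not the paper's actual formulation:

```python
# Toy sketch of energy-minimizing horizontal partitioning (assumed model,
# NOT the paper's): device i spends E_i(w_i) = a_i * w_i**2 energy units
# to process w_i units of a layer's input, and the workloads must cover
# the whole input: sum_i w_i = W.
#
# Lagrangian:    L = sum_i a_i * w_i**2 + lam * (W - sum_i w_i)
# Stationarity:  2 * a_i * w_i = lam   =>   w_i = lam / (2 * a_i)
# Enforcing the constraint fixes lam, giving w_i proportional to 1 / a_i.

def split_workload(a, W):
    """Optimal per-device workload for cost sum(a_i * w_i**2), sum(w_i) = W."""
    inv = [1.0 / ai for ai in a]
    s = sum(inv)
    return [W * x / s for x in inv]

if __name__ == "__main__":
    a = [1.0, 2.0, 4.0]   # hypothetical per-device energy coefficients
    W = 70.0              # total input workload of one layer
    w = split_workload(a, W)
    print(w)              # less efficient devices (larger a_i) receive less work
```

The closed form mirrors the Lagrange-duality approach the abstract mentions: the dual variable prices one unit of workload, and each device takes load until its marginal energy cost equals that price.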
Pages: 79611-79627
Page count: 17
Related Papers
50 records
  • [1] Energy-efficient Oriented Approximate Quantization Scheme for Fine-Grained Sparse Neural Network Acceleration
    Yu, Tianyang
    Wu, Bi
    Chen, Ke
    Yan, Chenggang
    Liu, Weiqiang
    [J]. 2022 IEEE 40TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD 2022), 2022, : 762 - 769
  • [2] A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment
    Xiao, Min
    Zhou, Jing
    Liu, Xuejiao
    Jiang, Mingda
    [J]. SENSORS, 2017, 17 (06)
  • [3] Deadline aware and energy-efficient scheduling algorithm for fine-grained tasks in mobile edge computing
    Lakhan, Abdullah
    Mohammed, Mazin Abed
    Rashid, Ahmed N.
    Kadry, Seifedine
    Abdulkareem, Karrar Hameed
    [J]. INTERNATIONAL JOURNAL OF WEB AND GRID SERVICES, 2022, 18 (02) : 168 - 193
  • [4] Fine-Grained Energy-Efficient Consolidation in SDN Networks and Devices
    Bolla, Raffaele
    Bruschi, Roberto
    Davoli, Franco
    Lombardo, Chiara
    [J]. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2015, 12 (02): : 132 - 145
  • [5] An Energy-Efficient Sparse Deep-Neural-Network Learning Accelerator With Fine-Grained Mixed Precision of FP8-FP16
    Lee, Jinsu
    Lee, Juhyoung
    Han, Donghyeon
    Lee, Jinmook
    Park, Gwangtae
    Yoo, Hoi-Jun
    [J]. IEEE SOLID-STATE CIRCUITS LETTERS, 2019, 2 (11): : 232 - 235
  • [6] CachinMobile: An Energy-Efficient Users Caching Scheme for Fog Computing
    Wang, Siming
    Huang, Xumin
    Liu, Yi
    Yu, Rong
    [J]. 2016 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2016,
  • [7] An Energy-Efficient Collaborative Caching Scheme for 5G Wireless Network
    Furqan, Muhammad
    Yan, Wen
    Zhang, Cheng
    Iqbal, Shahid
    Jan, Qasim
    Huang, Yongming
    [J]. IEEE ACCESS, 2019, 7 : 156907 - 156916
  • [8] Fine-grained Vehicle Recognition by Deep Convolutional Neural Network
    Huang, Kun
    Zhang, Bailing
    [J]. 2016 9TH INTERNATIONAL CONGRESS ON IMAGE AND SIGNAL PROCESSING, BIOMEDICAL ENGINEERING AND INFORMATICS (CISP-BMEI 2016), 2016, : 465 - 470
  • [9] Energy-Efficient Edge-Based Network Partitioning Scheme for Wireless Sensor Networks
    Venkateswarlu, Muni K.
    Kandasamy, A.
    Chandrasekaran, K.
    [J]. 2013 INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTING, COMMUNICATIONS AND INFORMATICS (ICACCI), 2013, : 1017 - 1022
  • [10] A hybrid neural network approach for fine-grained emotion classification and computing
    Zhang, Wei
    Wang, Meng
    Zhu, Yanchun
    Wang, Jian
    Ghei, Nasor
    [J]. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2019, 37 (03) : 3081 - 3091