An Energy-Efficient Fine-Grained Deep Neural Network Partitioning Scheme for Wireless Collaborative Fog Computing

Cited by: 15
Authors
Kilcioglu, Emre [1 ]
Mirghasemi, Hamed [1 ]
Stupia, Ivan [1 ]
Vandendorpe, Luc [1 ]
Affiliations
[1] Catholic Univ Louvain, ICTEAM ELEN, B-1348 Ottignies, Belgium
Keywords
Servers; Collaboration; Computational modeling; Edge computing; Wireless communication; Data models; Convex optimization; deep convolutional neural network; energy efficiency; fog computing; DNN partitioning; wireless collaborative computing; LEARNING INFERENCE; EDGE; CLOUD; INTERNET; OPTIMIZATION; INTELLIGENCE; TUTORIAL;
DOI
10.1109/ACCESS.2021.3084689
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Fog computing is a potential solution for heterogeneous resource-constrained mobile devices to collaboratively run deep learning-driven applications at the edge of the network instead of offloading their computations to powerful cloud servers, motivated by latency reduction, a decentralized structure, and privacy concerns. Compared to the mobile cloud computing concept, where computation-intensive deep learning operations are offloaded to powerful cloud servers, exploiting the computing capabilities of resource-constrained devices in a collaborative fog computing scenario with deep neural network (DNN) partitioning can improve delay performance and lessen the need for powerful servers to execute such applications. In this paper, we propose an energy-efficient fine-grained DNN partitioning scheme for wireless collaborative fog computing systems. The proposed scheme includes both layer-based partitioning, where the DNN model is divided layer by layer, and horizontal partitioning, where the input data of each layer operation is partitioned among multiple devices to enable parallel computing. A convex optimization problem is formulated to minimize the energy consumption of the collaborative part of the system by optimizing the communication and computation parameters as well as the workload of each participating device, and it is solved using primal-dual decomposition and Lagrange duality theory. Simulation results show that the proposed optimized scheme notably reduces energy consumption compared to a non-optimized scenario in which the workload is distributed equally among all participating devices while the communication and computation parameters are still optimized, which is already a challenging baseline.
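The workload-allocation idea in the abstract can be illustrated with a toy sketch. The snippet below is not the paper's actual formulation: it assumes a hypothetical quadratic per-device energy model E_i(w) = c_i * w^2 (the coefficients `costs` and the total workload `W` are invented for illustration), derives the energy-minimizing split from the Lagrange stationarity condition 2*c_i*w_i = lambda, and compares it against the equal-workload baseline that the abstract uses as a reference.

```python
# Toy illustration (not the paper's model): split a layer's total
# workload W among devices so that the sum of per-device energies
# E_i(w_i) = c_i * w_i^2 is minimized, subject to sum(w_i) == W.
# Stationarity 2*c_i*w_i = lambda gives w_i proportional to 1/c_i.

def optimal_split(costs, W):
    """Energy-minimizing workload shares under sum(w_i) == W."""
    inv = [1.0 / c for c in costs]   # 1/c_i from the KKT condition
    s = sum(inv)
    return [W * v / s for v in inv]

def energy(costs, loads):
    """Total energy under the quadratic per-device model."""
    return sum(c * w * w for c, w in zip(costs, loads))

if __name__ == "__main__":
    costs = [1.0, 2.0, 4.0]   # hypothetical per-device energy coefficients
    W = 12.0                  # hypothetical total workload of one layer
    w_opt = optimal_split(costs, W)
    w_eq = [W / len(costs)] * len(costs)   # equal-split baseline
    print("optimized:", w_opt, energy(costs, w_opt))
    print("equal:    ", w_eq, energy(costs, w_eq))
```

Under this toy model the slowest-to-compute device (largest c_i) receives the smallest share, and the optimized split always consumes no more energy than the equal split, mirroring the comparison reported in the abstract.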
Pages: 79611 - 79627
Page count: 17
Related Papers (50 total)
  • [21] Energy-efficient dynamic homomorphic security scheme for fog computing in IoT networks
    Gupta, Sejal
    Garg, Ritu
    Gupta, Nitin
    Alnumay, Waleed S.
    Ghosh, Uttam
    Sharma, Pradip Kumar
    [J]. JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2021, 58
  • [22] Energy-efficient hierarchical routing in wireless sensor networks based on fog computing
    Abidoye, Ademola Philip
    Kabaso, Boniface
    [J]. EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2021, 2021 (01)
  • [24] Analysis of Deep Convolutional Neural Network Models for the Fine-Grained Classification of Vehicles
    ul Khairi, Danish
    Ayaz, Ferheen
    Saeed, Nagham
    Ahsan, Kamran
    Ali, Syed Zeeshan
    [J]. FUTURE TRANSPORTATION, 2023, 3 (01): : 133 - 149
  • [25] Innovative Deep Neural Network Modeling for Fine-Grained Chinese Entity Recognition
    Liu, Jingang
    Xia, Chunhe
    Yan, Haihua
    Xu, Wenjing
    [J]. ELECTRONICS, 2020, 9 (06) : 1 - 16
  • [26] Energy-Efficient Collaborative Offloading in NOMA-Enabled Fog Computing for Internet of Things
    Feng, Weiyang
    Zhang, Ning
    Lin, Siyu
    Li, Shichao
    Wang, Zhe
    Ai, Bo
    Zhong, Zhangdui
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (15) : 13794 - 13807
  • [27] Research on the application of artificial neural network in the fine-grained software rejuvenation of computing system
    Wang, Zhan
    Guo, Cheng-Hao
    Liu, Feng-Yu
    Zhang, Hong
    [J]. Jisuanji Xuebao/Chinese Journal of Computers, 2008, 31 (07): : 1268 - 1275
  • [28] An Energy-Efficient Deep Neural Network Accelerator Design
    Jung, Jueun
    Lee, Kyuho Jason
    [J]. 2020 54TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2020, : 272 - 276
  • [29] An Energy-Efficient 3D CMP Design with Fine-Grained Voltage Scaling
    Zhao, Jishen
    Dong, Xiangyu
    Xie, Yuan
    [J]. 2011 DESIGN, AUTOMATION & TEST IN EUROPE (DATE), 2011, : 539 - 542
  • [30] A Fine-Grained, Uniform, Energy-Efficient Delay Element for FD-SOI Technologies
    Singhvi, Ajay
    Moreira, Matheus T.
    Tadros, Ramy N.
    Calazans, Ney L. V.
    Beerel, Peter A.
    [J]. 2015 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI, 2015, : 27 - 32