Energy Efficient Double Critic Deep Deterministic Policy Gradient Framework for Fog Computing

Cited: 0
Authors
Krishnamurthy, Bhargavi [1 ]
Shiva, Sajjan G. [2 ]
Affiliations
[1] Siddaganga Inst Technol, Dept Comp Sci, Tumakuru, Karnataka, India
[2] Univ Memphis, Dept Comp Sci, Memphis, TN 38152 USA
Keywords
Deterministic Policy Gradient; Fog computing; Energy; Q-learning; Double Critic
DOI
10.1109/AIIoT54504.2022.9817157
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Data is now growing at an ever faster pace, and big data applications must become more agile and flexible. This calls for a decentralized model that carries out the required substantial computation across edge devices, a need that has led to the innovation of fog computing. Energy consumption among edge devices is one of the most pressing issues in fog computing, and their high energy demand also drives up computation cost. In this paper, a Double Critic (DC) approach is enforced over the Deep Deterministic Policy Gradient (DDPG) technique to design the DC-DDPG framework, which formulates high-quality energy efficiency policies for fog computing. The performance of the proposed framework is superior to existing works on metrics such as energy consumption, response time, total cost, and throughput, measured under two fog computing scenarios: a fog layer with multiple entities in a single region, and a fog layer with multiple entities across multiple regions. Mathematical modeling shows that the formulated energy efficiency policies are of high quality, as they satisfy quality assurance metrics such as empirical correctness, robustness, model relevance, and data privacy.
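As a reading aid, below is a minimal sketch of the double-critic idea layered on DDPG, assuming a TD3-style clipped double Q-learning update: the bootstrap target takes the minimum of two target critics, which curbs the Q-value overestimation of single-critic DDPG. All network sizes, hyperparameters, and names here are illustrative assumptions, not the authors' DC-DDPG implementation.

```python
# Minimal double-critic DDPG update sketch in PyTorch (assumed details).
import copy
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    # Small two-layer MLP used for both actor and critics (illustrative size).
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

state_dim, action_dim, gamma, tau = 8, 2, 0.99, 0.005  # assumed hyperparameters

actor = nn.Sequential(mlp(state_dim, action_dim), nn.Tanh())  # deterministic policy mu(s)
critic1 = mlp(state_dim + action_dim, 1)  # first critic  Q1(s, a)
critic2 = mlp(state_dim + action_dim, 1)  # second critic Q2(s, a)
actor_t, critic1_t, critic2_t = map(copy.deepcopy, (actor, critic1, critic2))  # targets

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(
    list(critic1.parameters()) + list(critic2.parameters()), lr=1e-3)

def update(s, a, r, s2, done):
    """One double-critic DDPG update on a batch of transitions (s, a, r, s2, done)."""
    with torch.no_grad():
        # Double-critic target: bootstrap with the MINIMUM of the two target
        # critics, which counteracts the overestimation bias of plain DDPG.
        a2 = actor_t(s2)
        sa2 = torch.cat([s2, a2], dim=1)
        q_next = torch.min(critic1_t(sa2), critic2_t(sa2))
        target = r + gamma * (1.0 - done) * q_next

    sa = torch.cat([s, a], dim=1)
    critic_loss = ((critic1(sa) - target) ** 2).mean() + ((critic2(sa) - target) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Deterministic policy gradient: move the actor to maximize Q1(s, mu(s)).
    actor_loss = -critic1(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak-average all three target networks toward the online networks.
    for net, net_t in ((actor, actor_t), (critic1, critic1_t), (critic2, critic2_t)):
        for p, p_t in zip(net.parameters(), net_t.parameters()):
            p_t.data.mul_(1.0 - tau).add_(tau * p.data)

# Smoke test on a random batch (tensors here carry no fog-computing semantics).
batch = 32
update(torch.randn(batch, state_dim), torch.randn(batch, action_dim),
       torch.randn(batch, 1), torch.randn(batch, state_dim), torch.zeros(batch, 1))
```

In an actual fog-computing setup, the state would encode quantities such as device load and energy level, and the action an offloading or scheduling decision; the update rule above is unchanged by that choice.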
Pages: 521 - 526
Page count: 6
Related Papers
50 records in total
  • [1] Deep Deterministic Policy Gradient With Compatible Critic Network
    Wang, Di
    Hu, Mengqi
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (08) : 4332 - 4344
  • [2] A multi-critic deep deterministic policy gradient UAV path planning
    Wu, Runjia
    Gu, Fangqing
    Huang, Jie
    [J]. 2020 16TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND SECURITY (CIS 2020), 2020: 6 - 10
  • [3] DDRCN: Deep Deterministic Policy Gradient Recommendation Framework Fused with Deep Cross Networks
    Gao, Tianhan
    Gao, Shen
    Xu, Jun
    Zhao, Qihui
    [J]. APPLIED SCIENCES-BASEL, 2023, 13 (04)
  • [4] Efficient experience replay based deep deterministic policy gradient for AGC dispatch in integrated energy system
    Li, Jiawen
    Yu, Tao
    Zhang, Xiaoshun
    Li, Fusheng
    Lin, Dan
    Zhu, Hanxin
    [J]. APPLIED ENERGY, 2021, 285
  • [5] Deep Deterministic Policy Gradient Based on Double Network Prioritized Experience Replay
    Kang, Chaohai
    Rong, Chuiting
    Ren, Weijian
    Huo, Fengcai
    Liu, Pengyun
    [J]. IEEE ACCESS, 2021, 9 : 60296 - 60308
  • [6] Integration of design and control for renewable energy systems with an application to anaerobic digestion: A deep deterministic policy gradient framework
    Mendiola-Rodriguez, Tannia A.
    Ricardez-Sandoval, Luis A.
    [J]. ENERGY, 2023, 274
  • [7] Policy Space Noise in Deep Deterministic Policy Gradient
    Yan, Yan
    Liu, Quan
    [J]. NEURAL INFORMATION PROCESSING (ICONIP 2018), PT II, 2018, 11302 : 624 - 634
  • [8] Efficient Path Planning for Mobile Robot Based on Deep Deterministic Policy Gradient
    Gong, Hui
    Wang, Peng
    Ni, Cui
    Cheng, Nuo
    [J]. SENSORS, 2022, 22 (09)
  • [9] Deep Deterministic Policy Gradient for Portfolio Management
    Khemlichi, Firdaous
    Chougrad, Hiba
    Khamlichi, Youness Idrissi
    El Boushaki, Abdessamad
    Ben Ali, Safae Elhaj
    [J]. 2020 6TH IEEE CONGRESS ON INFORMATION SCIENCE AND TECHNOLOGY (IEEE CIST'20), 2020: 424 - 429
  • [10] Mutual Deep Deterministic Policy Gradient Learning
    Sun, Zhou
    [J]. 2022 INTERNATIONAL CONFERENCE ON BIG DATA, INFORMATION AND COMPUTER NETWORK (BDICN 2022), 2022: 508 - 513