Distributed Multi-Agent Approach for Achieving Energy Efficiency and Computational Offloading in MECNs Using Asynchronous Advantage Actor-Critic

Cited by: 0
Authors
Khan, Israr [1 ]
Raza, Salman [2 ]
Khan, Razaullah [3 ]
Rehman, Waheed ur [4 ]
Rahman, G. M. Shafiqur [5 ]
Tao, Xiaofeng [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Natl Engn Res Ctr Mobile Network Technol, Beijing 100876, Peoples R China
[2] Natl Text Univ, Dept Comp Sci, Faisalabad 37610, Pakistan
[3] Univ Engn & Technol, Dept Comp Sci, Mardan 23200, Pakistan
[4] Univ Peshawar, Dept Comp Sci, Peshawar 25120, Pakistan
[5] Beijing Univ Posts & Telecommun, Key Lab Universal Wireless Commun, Minist Educ, Beijing 100876, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
deep reinforcement learning; advanced asynchronous advantage actor-critic (A3C); multi-agent system; mobile edge computing; cloud computing; computational offloading; energy efficiency; REINFORCEMENT; ALLOCATION; DESIGN;
DOI
10.3390/electronics12224605
CLC Classification Number
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
Mobile edge computing networks (MECNs) based on hierarchical cloud computing can provide abundant resources to support next-generation internet of things (IoT) networks, which rely on artificial intelligence (AI). To address the instantaneous service and computation demands of IoT entities, AI-based solutions, particularly the deep reinforcement learning (DRL) strategy, have been intensively studied in both academia and industry. However, many open challenges remain to be tackled, namely slow agent convergence, network dynamics, resource diversity, and mode selection. We formulate a mixed-integer non-linear fractional programming (MINLFP) problem to optimize the allocation of computing and radio resources while maintaining quality of service (QoS) for each user equipment (UE). We adopt the advanced asynchronous advantage actor-critic (A3C) approach to take full advantage of distributed multi-agent-based solutions for achieving energy efficiency in MECNs. Numerical results show that the proposed approach, which employs A3C for computation offloading and resource allocation, significantly reduces energy consumption and improves energy efficiency. Its effectiveness is further demonstrated through comparisons with other benchmark schemes.
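As a rough illustration of the actor-critic machinery behind an A3C-based offloading agent of the kind described in the abstract, the sketch below shows a minimal single-worker advantage actor-critic update for binary offloading decisions in PyTorch. The state layout, reward/return values, network sizes, and the `ActorCritic` and `a3c_loss` names are illustrative assumptions, not the authors' implementation or problem formulation.

```python
# Illustrative sketch only: a minimal A3C-style actor-critic for binary
# offloading decisions (local execution vs. edge offloading). State layout,
# returns, and network sizes are assumptions, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCritic(nn.Module):
    """Shared-trunk actor-critic head, as commonly used by A3C workers."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)  # actor: offloading-decision logits
        self.value = nn.Linear(hidden, 1)           # critic: state-value estimate

    def forward(self, state: torch.Tensor):
        h = self.trunk(state)
        return self.policy(h), self.value(h)

def a3c_loss(model, states, actions, returns,
             entropy_coef: float = 0.01, value_coef: float = 0.5):
    """Advantage actor-critic loss for one worker's rollout.

    `returns` are n-step discounted returns bootstrapped from the critic;
    the advantage (returns - V(s)) is detached for the policy term.
    """
    logits, values = model(states)
    values = values.squeeze(-1)
    advantages = returns - values
    log_probs = F.log_softmax(logits, dim=-1)
    probs = F.softmax(logits, dim=-1)
    chosen_log_probs = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen_log_probs * advantages.detach()).mean()
    value_loss = F.mse_loss(values, returns)
    entropy = -(probs * log_probs).sum(dim=-1).mean()  # encourages exploration
    return policy_loss + value_coef * value_loss - entropy_coef * entropy

if __name__ == "__main__":
    # Toy state: [task size, required CPU cycles, channel gain, queue length]
    model = ActorCritic(state_dim=4, n_actions=2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    states = torch.rand(16, 4)               # placeholder MEC observations
    actions = torch.randint(0, 2, (16,))     # 0 = local execution, 1 = offload
    returns = torch.rand(16)                 # placeholder n-step returns
    loss = a3c_loss(model, states, actions, returns)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In full A3C, several such workers run asynchronously, each computing this loss on its own rollout and applying the resulting gradients to a shared global network, which is what gives the method its distributed multi-agent character.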
Pages: 20
Related Papers (50 records)
  • [1] A New Advantage Actor-Critic Algorithm For Multi-Agent Environments
    Paczolay, Gabor
    Harmati, Istvan
    [J]. 2020 23RD IEEE INTERNATIONAL SYMPOSIUM ON MEASUREMENT AND CONTROL IN ROBOTICS (ISMCR), 2020,
  • [2] The Implementation of Asynchronous Advantage Actor-Critic with Stigmergy in Network-assisted Multi-agent System
    Chen, Kun
    Li, Rongpeng
    Zhao, Zhifeng
    Zhang, Honggang
    [J]. 2020 12TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING (WCSP), 2020, : 1082 - 1087
  • [3] Distributed Multi-Agent Reinforcement Learning by Actor-Critic Method
    Heredia, Paulo C.
    Mou, Shaoshuai
    [J]. IFAC PAPERSONLINE, 2019, 52 (20): : 363 - 368
  • [4] Improving sample efficiency in Multi-Agent Actor-Critic methods
    Ye, Zhenhui
    Chen, Yining
    Jiang, Xiaohong
    Song, Guanghua
    Yang, Bowei
    Fan, Sheng
    [J]. APPLIED INTELLIGENCE, 2022, 52 (04) : 3691 - 3704
  • [5] Local Advantage Actor-Critic for Robust Multi-Agent Deep Reinforcement Learning
    Xiao, Yuchen
    Lyu, Xueguang
    Amato, Christopher
    [J]. 2021 INTERNATIONAL SYMPOSIUM ON MULTI-ROBOT AND MULTI-AGENT SYSTEMS (MRS), 2021, : 155 - 163
  • [6] A multi-agent reinforcement learning using Actor-Critic methods
    Li, Chun-Gui
    Wang, Meng
    Yuan, Qing-Neng
    [J]. PROCEEDINGS OF 2008 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2008, : 878 - 882
  • [7] Accelerated DRL Agent for Autonomous Voltage Control Using Asynchronous Advantage Actor-critic
    Xu, Zhengyuan
    Zan, Yan
    Xu, Chunlei
    Li, Jin
    Shi, Di
    Wang, Zhiwei
    Zhang, Bei
    Duan, Jiajun
    [J]. 2020 IEEE POWER & ENERGY SOCIETY GENERAL MEETING (PESGM), 2020,
  • [8] Toward Resilient Multi-Agent Actor-Critic Algorithms for Distributed Reinforcement Learning
    Lin, Yixuan
    Gade, Shripad
    Sandhu, Romeil
    Liu, Ji
    [J]. 2020 AMERICAN CONTROL CONFERENCE (ACC), 2020, : 3953 - 3958
  • [9] Asynchronous and Distributed Multi-agent Systems: An Approach Using Actor Model
    Reis, Felipe D.
    Nascimento, Tales B.
    Marcelino, Carolina G.
    Wanner, Elizabeth F.
    Borges, Henrique E.
    Salcedo-Sanz, Sancho
    [J]. OPTIMIZATION, LEARNING ALGORITHMS AND APPLICATIONS, OL2A 2022, 2022, 1754 : 701 - 713