UAV Assisted Cooperative Caching on Network Edge Using Multi-Agent Actor-Critic Reinforcement Learning

Cited by: 12
Authors
Araf, Sadman [1 ]
Saha, Adittya Soukarjya [1 ]
Kazi, Sadia Hamid [1 ]
Tran, Nguyen H. H. [2 ]
Alam, Md. Golam Rabiul [1 ]
Affiliations
[1] Brac Univ, Dept Comp Sci & Engn, Dhaka 1212, Bangladesh
[2] Univ Sydney, Fac Engn, Sch Comp Sci, Sydney, NSW 2006, Australia
Keywords
Base stations; Servers; Reinforcement learning; Cooperative caching; Vehicle dynamics; Computational modeling; Cloud computing; Cooperative edge caching; multi-access edge computing; multi-agent actor-critic; reinforcement learning; unmanned aerial vehicle (UAV); COMMUNICATION; MANAGEMENT;
DOI
10.1109/TVT.2022.3209079
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808; 0809
Abstract
In recent times, caching at edge nodes has become a well-known technique for overcoming strict latency constraints while simultaneously improving users' Quality of Experience (QoE). However, choosing an appropriate caching policy and content placement poses another significant challenge, which this research addresses. Conventional caching policies applied at the edge do not account for the dynamic and stochastic characteristics of edge caching. We therefore propose a cooperative deep reinforcement learning algorithm that handles the dynamic nature of content demand and ensures efficient use of storage through cooperation between nodes. In addition, previous works on cooperative caching assumed users to be static and did not consider their mobility. Therefore, we propose UAVs acting as aerial Base Stations (UAV-BS) to assist during peak hours, when a ground base station alone is insufficient to support the surge in user requests. In this research, we demonstrate cooperation between aerial and Ground Base Stations (GBS) with the aim of maximizing the global cache hit ratio. Simulations show that the proposed Cooperative Multi-Agent Actor-Critic algorithm outperforms conventional and reinforcement learning based caching methods and achieves a state-of-the-art global cache hit ratio when there is a surge in user requests. This opens the door for further research on cooperative caching in joint air-ground architectures.
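To make the abstract's approach more concrete, the following is a minimal sketch of a multi-agent actor-critic loop with a centralized critic for cooperative cache placement across base-station agents (e.g., one GBS and several UAV-BSs). It is an illustrative assumption only: the state and action encoding, network sizes, and all identifiers (Actor, CentralCritic, train_step, N_CONTENTS, CACHE_SLOTS, etc.) are hypothetical and do not reproduce the paper's exact formulation; the shared reward here simply stands in for the global cache hit ratio the paper seeks to maximize.

```python
# Minimal multi-agent actor-critic sketch for cooperative edge caching.
# All names, state/action definitions, and hyperparameters are illustrative
# assumptions, not the formulation used in the paper above.
import torch
import torch.nn as nn
from torch.distributions import Categorical

N_CONTENTS = 20   # size of the content catalogue (assumed)
CACHE_SLOTS = 4   # cache capacity per base station (assumed, unused in this toy step)
N_AGENTS = 3      # e.g. one GBS and two UAV-BSs (assumed)

class Actor(nn.Module):
    """Per-agent policy: maps a local observation to a distribution over contents to cache."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, obs):
        return Categorical(logits=self.net(obs))

class CentralCritic(nn.Module):
    """Centralized critic: scores the concatenated observations of all agents."""
    def __init__(self, joint_obs_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(joint_obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, joint_obs):
        return self.net(joint_obs).squeeze(-1)

obs_dim = 2 * N_CONTENTS                 # e.g. local request counts + cache-state indicator
actors = [Actor(obs_dim, N_CONTENTS) for _ in range(N_AGENTS)]
critic = CentralCritic(N_AGENTS * obs_dim)
opt = torch.optim.Adam(
    [p for a in actors for p in a.parameters()] + list(critic.parameters()), lr=1e-3)

def train_step(local_obs, global_hit_ratio):
    """One update from each agent's local observation and the shared reward
    (the global cache hit ratio observed after the joint caching decision)."""
    dists = [actor(obs) for actor, obs in zip(actors, local_obs)]
    actions = [d.sample() for d in dists]             # content index each agent caches
    log_probs = torch.stack([d.log_prob(a) for d, a in zip(dists, actions)])
    value = critic(torch.cat(local_obs, dim=-1))      # baseline from the joint observation
    advantage = torch.tensor(float(global_hit_ratio)) - value
    actor_loss = -(log_probs * advantage.detach()).sum()
    critic_loss = advantage.pow(2).mean()
    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()
    return [a.item() for a in actions]
```

In this centralized-training, decentralized-execution pattern, each base station keeps only its own actor at run time, while the critic, which sees all observations and the shared hit-ratio reward, is used only during training to stabilize learning; this mirrors the cooperative structure described in the abstract, though the paper's actual architecture and reward shaping may differ.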
Pages: 2322-2337 (16 pages)