Deep Multi-Agent Reinforcement Learning With Minimal Cross-Agent Communication for SFC Partitioning

Cited by: 1
Authors
Pentelas, Angelos [1 ,2 ]
De Vleeschauwer, Danny [1 ]
Chang, Chia-Yu [1 ]
De Schepper, Koen [1 ]
Papadimitriou, Panagiotis [2 ]
Affiliations
[1] Nokia Bell Labs, B-2018 Antwerp, Belgium
[2] Univ Macedonia, Dept Appl Informat, Thessaloniki 54636, Greece
Funding
European Union Horizon 2020;
Keywords
Topology; Servers; Resource management; Virtual links; Network topology; Network function virtualization; Multi-agent systems; Reinforcement learning; Multi-agent reinforcement learning; network function virtualization; self-learning orchestration;
DOI
10.1109/ACCESS.2023.3269576
Chinese Library Classification (CLC) code
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
Network Function Virtualization (NFV) decouples network functions from the underlying specialized devices, enabling network processing with higher flexibility and resource efficiency. This promotes the use of virtual network functions (VNFs), which can be grouped to form a service function chain (SFC). A critical challenge in NFV is SFC partitioning (SFCP), which is mathematically expressed as a graph-to-graph mapping problem. Given its NP-hardness, SFCP is commonly solved by approximation methods. Yet, the relevant literature exhibits a gradual shift towards data-driven SFCP frameworks, such as (deep) reinforcement learning (RL). In this article, we initially identify crucial limitations of existing RL-based SFCP approaches. In particular, we argue that most of them stem from the centralized implementation of RL schemes. Therefore, we devise a cooperative deep multi-agent reinforcement learning (DMARL) scheme for decentralized SFCP, which fosters the efficient communication of neighboring agents. Our simulation results (i) demonstrate that DMARL outperforms a state-of-the-art centralized double deep $Q$-learning algorithm, (ii) unfold the fundamental behaviors learned by the team of agents, (iii) highlight the importance of information exchange between agents, and (iv) showcase the implications stemming from various network topologies on the DMARL efficiency.
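As context for the centralized baseline named in the abstract, the sketch below illustrates the generic double deep Q-learning target update: the online network selects the next action, while the periodically synced target network evaluates it, which mitigates the Q-value over-estimation of vanilla Q-learning. This is a minimal NumPy illustration with hypothetical linear stand-in networks (W_online, W_target) and arbitrary dimensions, not the authors' SFC-partitioning implementation.

```python
import numpy as np

# Minimal sketch of the double deep Q-learning target (not the paper's code).
# Assumptions: linear stand-in Q-networks, arbitrary state/action sizes,
# a single transition (state, reward, next_state, done), discount gamma=0.99.

rng = np.random.default_rng(0)
n_features, n_actions, gamma = 8, 4, 0.99

# Stand-ins for the online and (periodically synced) target Q-networks.
W_online = rng.normal(size=(n_features, n_actions))
W_target = rng.normal(size=(n_features, n_actions))

def q_online(state):
    # Q-value per action from the online network.
    return state @ W_online

def q_target(state):
    # Q-value per action from the target network.
    return state @ W_target

state = rng.normal(size=n_features)
next_state = rng.normal(size=n_features)
reward, done = 1.0, False

# Double DQN: select the next action with the online net,
# evaluate it with the target net.
best_next_action = int(np.argmax(q_online(next_state)))
td_target = reward + (0.0 if done else gamma * q_target(next_state)[best_next_action])
print(td_target)
```

In the decentralized scheme the abstract describes, each agent would maintain its own value estimates and exchange limited information with neighboring agents; the specifics of that exchange are part of the paper itself and are not reproduced here.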
Pages: 40384-40398
Page count: 15
Related Papers
50 items in total
  • [41] Strategic Interaction Multi-Agent Deep Reinforcement Learning
    Zhou, Wenhong
    Li, Jie
    Chen, Yiting
    Shen, Lin-Cheng
    IEEE ACCESS, 2020, 8 : 119000 - 119009
  • [42] Multi-Agent Deep Reinforcement Learning in Vehicular OCC
    Islam, Amirul
    Musavian, Leila
    Thomos, Nikolaos
    2022 IEEE 95TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-SPRING), 2022,
  • [43] Teaching on a Budget in Multi-Agent Deep Reinforcement Learning
    Ilhan, Ercument
    Gow, Jeremy
    Perez-Liebana, Diego
    2019 IEEE CONFERENCE ON GAMES (COG), 2019,
  • [44] Research Progress of Multi-Agent Deep Reinforcement Learning
    Ding, Shi-Fei
    Du, Wei
    Zhang, Jian
    Guo, Li-Li
    Ding, Ding
    Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47 (07): : 1547 - 1567
  • [45] A Transfer Learning Framework for Deep Multi-Agent Reinforcement Learning
    Yi Liu
    Xiang Wu
    Yuming Bo
    Jiacun Wang
    Lifeng Ma
    IEEE/CAA Journal of Automatica Sinica, 2024, 11 (11) : 2346 - 2348
  • [46] A Transfer Learning Framework for Deep Multi-Agent Reinforcement Learning
    Liu, Yi
    Wu, Xiang
    Bo, Yuming
    Wang, Jiacun
    Ma, Lifeng
    IEEE/CAA Journal of Automatica Sinica, 2024, 11 (11) : 2346 - 2348
  • [47] Multi-Agent Reinforcement Learning
    Stankovic, Milos
    2016 13TH SYMPOSIUM ON NEURAL NETWORKS AND APPLICATIONS (NEUREL), 2016, : 43 - 43
  • [48] Multi-Agent Deep Reinforcement Learning for Cooperative Edge Caching via Hybrid Communication
    Wang, Fei
    Emara, Salma
    Kaplan, Isidor
    Li, Baochun
    Zeyl, Timothy
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 1206 - 1211
  • [49] Multi-agent deep reinforcement learning with type-based hierarchical group communication
    Hao Jiang
    Dianxi Shi
    Chao Xue
    Yajie Wang
    Gongju Wang
    Yongjun Zhang
    Applied Intelligence, 2021, 51 : 5793 - 5808
  • [50] Multi-agent deep reinforcement learning with type-based hierarchical group communication
    Jiang, Hao
    Shi, Dianxi
    Xue, Chao
    Wang, Yajie
    Wang, Gongju
    Zhang, Yongjun
    APPLIED INTELLIGENCE, 2021, 51 (08) : 5793 - 5808