Switching-aware multi-agent deep reinforcement learning for target interception

Times Cited: 1
Authors
Fan, Dongyu [1 ]
Shen, Haikuo [1 ,2 ]
Dong, Lijing [1 ,2 ,3 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Mech Elect & Control Engn, Beijing 100044, Peoples R China
[2] Beijing Jiaotong Univ, Key Lab Vehicle Adv Mfg Measuring & Control Techn, Minist Educ, Beijing 100044, Peoples R China
[3] Beijing Inst Technol, Beijing Adv Innovat Ctr Intelligent Robots & Syst, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-agent system; Reinforcement learning; Deep learning; Switching topology; TRACKING; SYSTEMS; NETWORKS; GAME; GO;
DOI
10.1007/s10489-022-03821-9
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104; 0812; 0835; 1405;
Abstract
This paper investigates the multi-agent interception problem under switching topology based on deep reinforcement learning. Due to communication restrictions or network attacks, the connectivity between any two intercepting agents may change throughout the tracking process that precedes a successful interception. That is, the topology of the multi-agent system switches, which causes each agent's observation to be partially missing or to jump dynamically. To address this issue, a novel multi-agent level-fusion actor-critic (MALFAC) approach is proposed with a direction-assisted (DA) actor and a dimensional pyramid fusion (DPF) critic. In addition, an experience adviser (EA) function is added to the actor's learning process. Furthermore, a reward factor is proposed to balance individual reward against shared reward. Experimental results show that the proposed method outperforms recent algorithms in multi-agent interception scenarios with switching topologies, achieving the highest interception success rate with the fewest average steps. An ablation study also verifies the effectiveness of the innovative components of the proposed method, and extensive experiments demonstrate the scalability of the method across different scenarios.
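As a rough illustration of two ideas named in the abstract, the sketch below assumes (a) that topology switching can be modeled by masking the parts of an agent's observation belonging to currently disconnected neighbors, and (b) that the "reward factor" acts as a convex-combination weight between individual and shared reward. The function names masked_observation and blend_reward, the parameter beta, and the observation layout are all hypothetical and are not taken from the paper; this is a minimal sketch of the general concepts, not the authors' method.

```python
import numpy as np

def masked_observation(full_obs, adjacency, agent_idx):
    """Zero out the parts of an agent's observation coming from neighbors
    it is currently disconnected from.

    full_obs:  (n_agents, obs_dim) array, row i = information about agent i
    adjacency: (n_agents, n_agents) 0/1 matrix of the current topology
    agent_idx: index of the observing agent

    Hypothetical model of the "partially missing" observations caused by
    topology switching; the paper's exact observation model may differ.
    """
    mask = adjacency[agent_idx].astype(float)
    mask[agent_idx] = 1.0                       # an agent always observes itself
    return full_obs * mask[:, None]

def blend_reward(individual_reward, shared_reward, beta=0.5):
    """Convex combination of individual and shared reward (an assumed reading
    of the abstract's 'reward factor'): r = beta*r_ind + (1-beta)*r_shared."""
    return beta * individual_reward + (1.0 - beta) * shared_reward

# Toy example: 3 interceptors; agent 0 loses its link to agent 2 after a switch.
obs = np.arange(12, dtype=float).reshape(3, 4)
adj = np.ones((3, 3)) - np.eye(3)
adj[0, 2] = adj[2, 0] = 0.0
print(masked_observation(obs, adj, agent_idx=0))        # agent 2's rows are zeroed
print(blend_reward(individual_reward=-0.3, shared_reward=1.0, beta=0.7))
```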
Pages: 7876-7891
Page count: 16