Formation Tracking of Spatiotemporal Multiagent Systems: A Decentralized Reinforcement Learning Approach

Citations: 1
Authors
Liu, Tianrun [1 ]
Chen, Yang-Yang [1 ]
Affiliations
[1] Southeast Univ, Sch Automat, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Training; Reinforcement learning; Artificial neural networks; Observers; Orbits; Spatiotemporal phenomena; Safety; Numerical models; Optimization; Multi-agent systems;
DOI
10.1109/MSMC.2024.3401404
CLC number
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
This article investigates the formation tracking problem for discrete-time uncertain spatiotemporal multiagent systems (MASs). Note that the common multiagent reinforcement learning (MARL) method requires the actions and states of all agents to train the centralized critic; hence, this method may be impractical under constrained communication. Therefore, a decentralized RL framework is proposed that combines a neural network boundary approximation distributed observer (NNBADO) and an intelligent nonaffine leader (INL). As a result, the formation tracking problem for each agent can be modeled as a partially observable Markov decision process (POMDP). A novel RL formation tracking algorithm is designed based on a fusion reward scheme that synthesizes the orbit tracking and formation objectives. Experimental results show that the algorithm improves formation accuracy.
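The fusion reward scheme mentioned in the abstract combines an orbit-tracking objective with a formation-keeping objective into a single per-agent reward. A minimal sketch of such a reward is given below; the weights, error definitions, and function name `fusion_reward` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fusion_reward(agent_pos, desired_orbit_pos, neighbor_offsets,
                  w_orbit=0.6, w_form=0.4):
    """Illustrative fusion reward combining orbit tracking and formation
    objectives (weights and error terms are assumed, not from the paper).

    agent_pos:          this agent's position, shape (2,)
    desired_orbit_pos:  point on the desired orbit the agent should track
    neighbor_offsets:   list of (neighbor_pos, desired_relative_offset) pairs
    """
    # Orbit-tracking term: distance to the desired orbit point.
    orbit_err = np.linalg.norm(agent_pos - desired_orbit_pos)

    # Formation term: mean deviation from the desired offsets to neighbors.
    form_err = 0.0
    for neighbor_pos, offset in neighbor_offsets:
        form_err += np.linalg.norm((agent_pos - neighbor_pos) - offset)
    if neighbor_offsets:
        form_err /= len(neighbor_offsets)

    # Negative weighted sum: larger tracking/formation errors -> lower reward.
    return -(w_orbit * orbit_err + w_form * form_err)
```

With both errors zero (agent exactly on its orbit point and at the desired offset from every neighbor), the reward attains its maximum of 0; any deviation makes it negative, which is the usual shape for a tracking reward in RL.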
Pages: 52-60
Page count: 9
Related papers
50 records total
  • [31] Effect of reinforcement learning on coordination of multiagent systems
    Bukkapatnam, S
    Gao, G
    NETWORK INTELLIGENCE: INTERNET-BASED MANUFACTURING, 2000, 4208 : 31 - 41
  • [32] Coordination in multiagent reinforcement learning systems by virtual reinforcement signals
    Kamal, M.
    Murata, Junichi
    INTERNATIONAL JOURNAL OF KNOWLEDGE-BASED AND INTELLIGENT ENGINEERING SYSTEMS, 2007, 11 (03) : 181 - 191
  • [33] Scalable Reinforcement Learning for Multiagent Networked Systems
    Qu, Guannan
    Wierman, Adam
    Li, Na
    OPERATIONS RESEARCH, 2022, 70 (06) : 3601 - 3628
  • [34] Multiagent Reinforcement Social Learning toward Coordination in Cooperative Multiagent Systems
    Hao, Jianye
    Leung, Ho-Fung
    Ming, Zhong
    ACM TRANSACTIONS ON AUTONOMOUS AND ADAPTIVE SYSTEMS, 2015, 9 (04)
  • [35] Adaptive Multigradient Recursive Reinforcement Learning Event-Triggered Tracking Control for Multiagent Systems
    Li, Hongyi
    Wu, Ying
    Chen, Mou
    Lu, Renquan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (01) : 144 - 156
  • [36] Adaptive Event-Triggered Bipartite Formation for Multiagent Systems via Reinforcement Learning
    Zhao, Huarong
    Shan, Jinjun
    Peng, Li
    Yu, Hongnian
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (12) : 17817 - 17828
  • [37] Reinforcement Learning H∞ Optimal Formation Control for Perturbed Multiagent Systems With Nonlinear Faults
    Wu, Yuxia
    Liang, Hongjing
    Xuan, Shuxing
    Ahn, Choon Ki
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2025, 55 (03): : 1935 - 1947
  • [38] A Study on Cooperative Action Selection Considering Unfairness in Decentralized Multiagent Reinforcement Learning
    Matsui, Toshihiro
    Matsuo, Hiroshi
    ICAART: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 1, 2017, : 88 - 95
  • [39] A Decentralized Approach to Intrusion Detection in Dynamic Networks of the Internet of Things Based on Multiagent Reinforcement Learning with Interagent Interaction
    Kalinin, M. O.
    Tkacheva, E. I.
    AUTOMATIC CONTROL AND COMPUTER SCIENCES, 2023, 57 : 1025 - 1032
  • [40] N-learning: A reinforcement learning paradigm for multiagent systems
    Mansfield, M
    Collins, JJ
    Eaton, M
    Collins, T
    AI 2005: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2005, 3809 : 684 - 694