A Fast Decentralized Scheduling Method of Cooperative Localization Based on Actor-Critic Deep Reinforcement Learning

Cited: 0
Authors
Di, Xinyue [1 ]
Guan, Yalin [1 ]
Yu, Weijia [1 ]
Lin, Heyun [2 ]
Affiliations
[1] Commun Univ China, Beijing, Peoples R China
[2] Guangxi Power Grid Dispatching Control Ctr, Nanning, Peoples R China
Keywords
vehicular localization; cooperative localization; scheduling problem; deep reinforcement learning; actor-critic algorithm;
DOI
10.1109/ICICSE58435.2023.10211593
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
With the growing number of automated vehicles, vehicle localization has attracted considerable attention. Among the available methods, cooperative localization is particularly attractive for its high coverage and accuracy. However, exhaustively measuring and exchanging information between all adjacent vehicles is costly and incurs large delays, so scheduling the transmissions for cooperative localization is a challenge. In this paper, we model cooperative localization as a partially observable Markov decision process and propose an actor-critic deep reinforcement learning algorithm that brings the vehicles to a given localization accuracy threshold as quickly as possible. The proposed algorithm allows the transmissions to be scheduled optimally in a distributed manner. Simulation results show that, compared with random, greedy, and two existing deep reinforcement learning algorithms, the proposed algorithm performs better and adapts more readily to large-scale complex networks.
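The abstract's core idea, choosing which measurement to schedule next so that accuracy reaches a threshold in as few steps as possible, can be illustrated with a minimal actor-critic sketch. The toy environment below is a hypothetical stand-in for the paper's simulator (the `GAINS` vector, the scalar uncertainty state, and the threshold are all invented for illustration): an agent picks one of N neighbors to range against each step, each neighbor shrinks a scalar uncertainty by a different factor, a per-step cost of -1 rewards finishing quickly, and the TD error from a linear critic serves as the advantage estimate for the policy-gradient update.

```python
# Minimal advantage actor-critic sketch on a toy scheduling problem.
# NOT the paper's algorithm or simulator: GAINS, THRESHOLD, the scalar
# uncertainty state, and the linear actor/critic are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS = 4                             # candidate neighbors to measure
GAINS = np.array([0.05, 0.1, 0.2, 0.4])   # uncertainty reduction per neighbor
THRESHOLD = 0.2                           # target localization accuracy

theta = np.zeros(N_ACTIONS)               # actor parameters (softmax logits)
w = 0.0                                   # critic parameter, v(u) = w * u
alpha_pi, alpha_v, gamma = 0.1, 0.1, 0.95

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def episode():
    """Run one episode, updating actor and critic online; return its length."""
    global theta, w
    u = 1.0                               # initial uncertainty
    steps = 0
    while u > THRESHOLD and steps < 100:
        probs = softmax(theta)            # state-independent policy keeps the sketch tiny
        a = rng.choice(N_ACTIONS, p=probs)
        u_next = u * (1.0 - GAINS[a])
        done = u_next <= THRESHOLD
        r = -1.0                          # per-step cost: reach the threshold quickly
        # TD error = advantage estimate from the linear critic
        v, v_next = w * u, (0.0 if done else w * u_next)
        delta = r + gamma * v_next - v
        w += alpha_v * delta * u          # critic: semi-gradient TD(0)
        grad_log = -probs                 # grad of log pi(a) w.r.t. theta
        grad_log[a] += 1.0
        theta += alpha_pi * delta * grad_log  # actor: policy-gradient step
        u = u_next
        steps += 1
    return steps

lengths = [episode() for _ in range(300)]
early, late = np.mean(lengths[:30]), np.mean(lengths[-30:])
print(early, late)
```

Over training, episodes should get shorter as the actor learns to favor the most informative neighbor, mirroring the paper's goal of reaching the accuracy threshold in as few scheduled transmissions as possible; the full method replaces the linear actor and critic with deep networks and runs the policy at each vehicle for decentralized scheduling.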
Pages: 26 - 31
Page count: 6
Related Papers
50 records
  • [1] DAG-based workflows scheduling using Actor-Critic Deep Reinforcement Learning
    Koslovski, Guilherme Piegas
    Pereira, Kleiton
    Albuquerque, Paulo Roberto
    [J]. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 150 : 354 - 363
  • [2] A Prioritized objective actor-critic method for deep reinforcement learning
    Nguyen, Ngoc Duy
    Nguyen, Thanh Thi
    Vamplew, Peter
    Dazeley, Richard
    Nahavandi, Saeid
    [J]. NEURAL COMPUTING AND APPLICATIONS, 2021, 33 : 10335 - 10349
  • [4] Integrated Actor-Critic for Deep Reinforcement Learning
    Zheng, Jiaohao
    Kurt, Mehmet Necip
    Wang, Xiaodong
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT IV, 2021, 12894 : 505 - 518
  • [5] Swarm Reinforcement Learning Method Based on an Actor-Critic Method
    Iima, Hitoshi
    Kuroe, Yasuaki
    [J]. SIMULATED EVOLUTION AND LEARNING, 2010, 6457 : 279 - 288
  • [6] Actor-Critic Deep Reinforcement Learning for Solving Job Shop Scheduling Problems
    Liu, Chien-Liang
    Chang, Chuan-Chin
    Tseng, Chun-Jan
    [J]. IEEE ACCESS, 2020, 8 : 71752 - 71762
  • [7] Decentralized Scheduling for Cooperative Localization With Deep Reinforcement Learning
    Peng, Bile
    Seco-Granados, Gonzalo
    Steinmetz, Erik
    Frohle, Markus
    Wymeersch, Henk
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2019, 68 (05) : 4295 - 4305
  • [8] An effective deep actor-critic reinforcement learning method for solving the flexible job shop scheduling problem
    School of Computer Science, Hunan University of Technology, Zhuzhou 412007, China
    [J]. NEURAL COMPUTING AND APPLICATIONS, 2024 : 11877 - 11899
  • [9] Visual Navigation with Actor-Critic Deep Reinforcement Learning
    Shao, Kun
    Zhao, Dongbin
    Zhu, Yuanheng
    Zhang, Qichao
    [J]. 2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018
  • [10] Stochastic Integrated Actor-Critic for Deep Reinforcement Learning
    Zheng, Jiaohao
    Kurt, Mehmet Necip
    Wang, Xiaodong
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (05) : 6654 - 6666