Deceptive Path Planning via Count-Based Reinforcement Learning under Specific Time Constraint

Cited by: 3
Authors
Chen, Dejun [1 ]
Zeng, Yunxiu [1 ]
Zhang, Yi [1 ]
Li, Shuilin [1 ]
Xu, Kai [1 ]
Yin, Quanjun [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Syst Engn, Changsha 410073, Peoples R China
Keywords
deception; deceptiveness; path planning; goal recognition; count-based reinforcement learning
DOI
10.3390/math12131979
Chinese Library Classification
O1 [Mathematics];
Discipline Classification Code
0701; 070101;
Abstract
Deceptive path planning (DPP) aims to find a path that minimizes the probability of an observer identifying the true goal of the observed agent before that goal is reached. It is important for addressing issues such as public safety, strategic path planning, and logistics route privacy protection. Existing methods often rely on "dissimulation", hiding the truth, to obscure paths while ignoring time constraints. Building on the theory of probabilistic goal recognition based on cost differences, we propose DPP_Q, a DPP method based on count-based Q-learning, for solving DPP problems in discrete path-planning domains under specific time constraints. Furthermore, to extend this approach to continuous domains, we propose a new probabilistic goal recognition model, the Approximate Goal Recognition Model (AGRM), and verify its feasibility in discrete path-planning domains. Finally, we propose DPP_PPO, a DPP method based on proximal policy optimization for continuous path-planning domains under specific time constraints. To our knowledge, DPP methods such as DPP_Q and DPP_PPO have not previously been explored in the field of path planning. Experimental results show that, in discrete domains, DPP_Q improves the average deceptiveness of paths over traditional methods by 12.53% on average. In continuous domains, DPP_PPO shows significant advantages over random-walk methods. Both DPP_Q and DPP_PPO demonstrate good applicability in path-planning domains with uncomplicated obstacles.
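The abstract names count-based Q-learning as the core of DPP_Q but gives no implementation detail. As a purely illustrative sketch (not the paper's method, which additionally optimizes deceptiveness under time constraints), the following shows one common form of count-based exploration: tabular Q-learning on a hypothetical 5x5 gridworld where action selection is biased by an optimism bonus beta / sqrt(N(s, a)) over visit counts. All names, grid dimensions, and hyperparameters here are assumptions for the sketch.

```python
from collections import defaultdict

GRID_W, GRID_H = 5, 5
GOAL = (4, 4)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # moves along x or y

def step(state, action_idx):
    """Deterministic transition; the agent is clamped at the grid border."""
    x, y = state
    dx, dy = ACTIONS[action_idx]
    nxt = (min(max(x + dx, 0), GRID_W - 1), min(max(y + dy, 0), GRID_H - 1))
    reward = 1.0 if nxt == GOAL else -0.01  # small step cost, reward at goal
    return nxt, reward, nxt == GOAL

def count_based_q_learning(episodes=500, alpha=0.5, gamma=0.95,
                           beta=0.5, max_steps=50):
    """Tabular Q-learning whose behavior policy is greedy in
    Q(s, a) + beta / sqrt(N(s, a) + 1), so rarely tried state-action
    pairs look optimistic and visitation spreads across the grid."""
    Q = defaultdict(float)  # Q[(state, action)] -> value estimate
    N = defaultdict(int)    # N[(state, action)] -> visit count
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(max_steps):
            # Count-based exploration: the bonus shrinks as a pair is visited.
            a = max(range(len(ACTIONS)),
                    key=lambda i: Q[(s, i)] + beta / (N[(s, i)] + 1) ** 0.5)
            N[(s, a)] += 1
            nxt, r, done = step(s, a)
            # Standard off-policy Q-learning target (bonus affects only
            # action selection, not the learned values).
            target = r + (0.0 if done else
                          gamma * max(Q[(nxt, i)] for i in range(len(ACTIONS))))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = nxt
            if done:
                break
    return Q

def greedy_path(Q, start=(0, 0), max_steps=50):
    """Roll out the purely greedy policy implied by the learned Q-table."""
    s, path = start, [start]
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: Q[(s, i)])
        s, _, done = step(s, a)
        path.append(s)
        if done:
            break
    return path
```

Because the bonus sits only in the behavior policy, the learned Q-table still estimates plain discounted return, and the greedy rollout heads to the goal once the count-driven exploration has discovered it. DPP_Q would additionally have to trade this goal-directedness against deceptiveness, which this sketch does not attempt.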
Pages: 20
Related Papers
50 records in total
  • [21] Robot path planning in dynamic environment based on reinforcement learning
    Zhuang, Xiao-Dong
    Meng, Qing-Chun
    Wei, Tian-Bin
    Wang, Xu-Zhu
    Tan, Rui
    Li, Xiao-Jing
    Journal of Harbin Institute of Technology (New Series), 2001, 8 (03) : 253 - 255
  • [22] A Deep Reinforcement Learning Based Approach for AGVs Path Planning
    Guo, Xinde
    Ren, Zhigang
    Wu, Zongze
    Lai, Jialun
    Zeng, Deyu
    Xie, Shengli
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 6833 - 6838
  • [23] A Reinforcement Learning Based Online Coverage Path Planning Algorithm
    Carvalho, Jose Pedro
    Pedro Aguiar, A.
    2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC, 2023, : 81 - 86
  • [24] A decentralized path planning model based on deep reinforcement learning
    Guo, Dong
    Ji, Shouwen
    Yao, Yanke
    Chen, Cheng
    COMPUTERS & ELECTRICAL ENGINEERING, 2024, 117
  • [25] A UAV Path Planning Method Based on Deep Reinforcement Learning
    Li, Yibing
    Zhang, Sitong
    Ye, Fang
    Jiang, Tao
    Li, Yingsong
    2020 IEEE USNC-CNC-URSI NORTH AMERICAN RADIO SCIENCE MEETING (JOINT WITH AP-S SYMPOSIUM), 2020, : 93 - 94
  • [26] Real Time Path Planning of Robot using Deep Reinforcement Learning
    Raajan, Jeevan
    Srihari, P. V.
    Satya, Jayadev P.
    Bhikkaji, B.
    Pasumarthy, Ramkrishna
    IFAC PAPERSONLINE, 2020, 53 (02): : 15602 - 15607
  • [27] Real-time local path planning strategy based on deep distributional reinforcement learning
    Du, Shengli
    Zhu, Zexing
    Wang, Xuefang
    Han, Honggui
    Qiao, Junfei
    NEUROCOMPUTING, 2024, 599
  • [28] Path-Planning Method Based on Reinforcement Learning for Cooperative Two-Crane Lift Considering Load Constraint
    An, Jianqi
    Ou, Huimin
    Wu, Min
    Chen, Xin
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2025, 55 (04): : 2913 - 2923
  • [29] Path Planning for UAV Ground Target Tracking via Deep Reinforcement Learning
    Li, Bohao
    Wu, Yunjie
    IEEE ACCESS, 2020, 8 : 29064 - 29074
  • [30] Observability-based Energy Efficient Path Planning with Background Flow via Deep Reinforcement Learning
    Mei, Jiazhong
    Kutz, J. Nathan
    Brunton, Steven L.
    2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL, CDC, 2023, : 4364 - 4371