Deceptive Path Planning via Count-Based Reinforcement Learning under Specific Time Constraint

Cited by: 2
Authors
Chen, Dejun [1 ]
Zeng, Yunxiu [1 ]
Zhang, Yi [1 ]
Li, Shuilin [1 ]
Xu, Kai [1 ]
Yin, Quanjun [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Syst Engn, Changsha 410073, Peoples R China
Keywords
deception; deceptiveness; path planning; goal recognition; count-based reinforcement learning
DOI
10.3390/math12131979
Chinese Library Classification
O1 [Mathematics]
Discipline code
0701; 070101
Abstract
Deceptive path planning (DPP) aims to find a path that minimizes the probability of an observer identifying the observed agent's real goal before the agent reaches it. It is important for addressing issues such as public safety, strategic path planning, and protecting the privacy of logistics routes. Existing methods often rely on "dissimulation" (hiding the truth) to obscure paths while ignoring time constraints. Building on the theory of probabilistic goal recognition based on cost difference, we propose DPP_Q, a DPP method based on count-based Q-learning for solving DPP problems in discrete path-planning domains under specific time constraints. To extend this approach to continuous domains, we propose a new probabilistic goal recognition model, the Approximate Goal Recognition Model (AGRM), and verify its feasibility in discrete path-planning domains. Finally, we propose DPP_PPO, a DPP method based on proximal policy optimization for continuous path-planning domains under specific time constraints. DPP methods of this kind have not previously been explored in the path-planning literature. Experimental results show that, in discrete domains, DPP_Q improves the average deceptiveness of paths over traditional methods by 12.53% on average. In continuous domains, DPP_PPO shows significant advantages over random-walk baselines. Both DPP_Q and DPP_PPO demonstrate good applicability in path-planning domains with uncomplicated obstacles.
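To make the abstract's ingredients concrete, the following is a minimal sketch (not the paper's implementation) of count-based Q-learning for deceptive path planning on a toy grid. The grid size, goal positions, reward weights, and the Manhattan-distance cost model are all illustrative assumptions. The observer's belief follows the cost-difference goal-recognition idea the abstract builds on, `P(g | s) ∝ exp(-(cost via s − optimal cost))`, over a real goal and one decoy; the agent's reward penalizes the observer's belief in the real goal, and the exploration bonus is the standard count-based term `β / √N(s, a)`.

```python
import math
import random
from collections import defaultdict

SIZE = 5                                   # toy 5x5 grid (illustrative)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
START, G_REAL, G_FAKE = (0, 0), (4, 4), (4, 0)

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def goal_posterior(state, goal):
    """Cost-difference goal recognition: P(g | s) ∝ exp(-costdif(s, g))."""
    diffs = {g: manhattan(START, state) + manhattan(state, g) - manhattan(START, g)
             for g in (G_REAL, G_FAKE)}
    weights = {g: math.exp(-d) for g, d in diffs.items()}
    return weights[goal] / sum(weights.values())

def step(state, action):
    nx = min(max(state[0] + action[0], 0), SIZE - 1)
    ny = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (nx, ny)
    done = nxt == G_REAL
    # Step cost, a deceptiveness penalty proportional to the observer's
    # belief in the real goal, and a terminal bonus for reaching it.
    reward = -1.0 - 0.5 * goal_posterior(nxt, G_REAL) + (10.0 if done else 0.0)
    return nxt, reward, done

def train(episodes=5000, alpha=0.2, gamma=0.95, eps=0.3, beta=0.5):
    Q = defaultdict(lambda: [0.0] * len(ACTIONS))
    N = defaultdict(int)  # state-action visit counts for the bonus
    for _ in range(episodes):
        s = START
        for _ in range(40):  # episode length cap ~ "time constraint"
            a = (random.randrange(len(ACTIONS)) if random.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
            N[(s, a)] += 1
            nxt, r, done = step(s, ACTIONS[a])
            r += beta / math.sqrt(N[(s, a)])  # count-based exploration bonus
            target = r + (0.0 if done else gamma * max(Q[nxt]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = nxt
            if done:
                break
    return Q

def greedy_path(Q, max_steps=40):
    s, path = START, [START]
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

Because `G_FAKE` sits on the same row as `START`, early moves along the bottom edge keep the observer's posterior over `G_REAL` near 0.5, which is the kind of deceptive detour the penalty encourages; the step cost still drives the agent to the real goal within the horizon.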
Pages: 20
Related Papers
50 records
  • [1] Count-Based Exploration via Embedded State Space for Deep Reinforcement Learning
    Liu, Xinyue
    Li, Qinghua
    Li, Yuangang
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022
  • [2] Count-Based Exploration in Feature Space for Reinforcement Learning
    Martin, Jarryd
    Narayanan, Suraj S.
    Everitt, Tom
    Hutter, Marcus
    PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 2471 - 2478
  • [3] A Study of Count-Based Exploration and Bonus for Reinforcement Learning
    Xu, Zhi-Xiong
    Chen, Xi-Liang
    Cao, Lei
    Li, Chen-Xi
    2017 2ND IEEE INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND BIG DATA ANALYSIS (ICCCBDA 2017), 2017, : 425 - 429
  • [4] #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
    Tang, Haoran
    Houthooft, Rein
    Foote, Davis
    Stooke, Adam
    Chen, Xi
    Duan, Yan
    Schulman, John
    De Turck, Filip
    Abbeel, Pieter
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [5] Reinforcement Learning Based Path Planning Method for Underactuated AUV with Sonar Constraint
    Pang, Zhouqi
    Lin, Xiaobo
    Hao, Chengpeng
    Hou, Chaohuan
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 2382 - 2387
  • [6] Path planning and dynamic collision avoidance algorithm under COLREGs via deep reinforcement learning
    Xu, Xinli
    Cai, Peng
    Ahmed, Zahoor
    Yellapu, Vidya Sagar
    Zhang, Weidong
    NEUROCOMPUTING, 2022, 468 : 181 - 197
  • [7] Robot path planning based on deep reinforcement learning
    Long, Yinxin
    He, Huajin
    2020 IEEE CONFERENCE ON TELECOMMUNICATIONS, OPTICS AND COMPUTER SCIENCE (TOCS), 2020, : 151 - 154
  • [8] AGV Path Planning Model based on Reinforcement Learning
    Liao, Xiaofei
    Wang, Yang
    Xuan, Yiliang
    Wu, Dequan
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 6722 - 6726
  • [9] Robot path planning algorithm based on reinforcement learning
    Zhang F.
    Li N.
    Yuan R.
    Fu Y.
    Huazhong Keji Daxue Xuebao (Ziran Kexue Ban)/Journal of Huazhong University of Science and Technology (Natural Science Edition), 2018, 46 (12): : 65 - 70
  • [10] Path planning for active SLAM based on deep reinforcement learning under unknown environments
    Wen, Shuhuan
    Zhao, Yanfang
    Yuan, Xiao
    Wang, Zongtao
    Zhang, Dan
    Manfredi, Luigi
    INTELLIGENT SERVICE ROBOTICS, 2020, 13 (02) : 263 - 272