Improving Robustness of Deep Reinforcement Learning Agents: Environment Attack based on the Critic Network

Cited: 0
Authors
Schott, Lucas [1 ,2 ]
Hajri, Hatem [1 ]
Lamprier, Sylvain [2 ]
Affiliations
[1] IRT SystemX, Palaiseau, France
[2] Sorbonne Univ, ISIR, Paris, France
Keywords
Deep Reinforcement Learning; Adversarial Training; Robustness;
DOI
10.1109/IJCNN55064.2022.9892901
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
To improve the robustness of deep reinforcement learning agents, a line of recent work focuses on producing disturbances of the dynamics of the environment. Existing approaches to generating such disturbances are environment adversarial reinforcement learning methods. These methods frame the problem as a two-player game between a protagonist agent, which learns to perform a task in an environment, and an adversary agent, which learns to disturb the dynamics of that environment so as to make the protagonist fail. Alternatively, we propose to build on gradient-based adversarial attacks, commonly used for classification tasks, which we apply to the critic network of the protagonist to identify efficient disturbances of the environment dynamics. Rather than training an adversary agent, which usually proves very complex and unstable, we leverage the knowledge held by the protagonist's critic network to dynamically increase the complexity of the task at each step of the learning process. We show that our method, while being faster and lighter, leads to significantly better improvements in policy robustness than existing methods in the literature.
Pages: 8