Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness

Cited by: 0
Authors
Korkmaz, Ezgi
Institution
Keywords
LEVEL;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning from raw high dimensional data via interaction with a given environment has been effectively achieved through the utilization of deep neural networks. Yet the observed degradation in policy performance caused by imperceptible worst-case policy dependent translations along high sensitivity directions (i.e. adversarial perturbations) raises concerns about the robustness of deep reinforcement learning policies. In our paper, we show that these high sensitivity directions do not lie only along particular worst-case directions, but rather are more abundant in the deep neural policy landscape and can be found via more natural means in a black-box setting. Furthermore, we show that vanilla training techniques intriguingly result in learning more robust policies compared to the policies learnt via the state-of-the-art adversarial training techniques. We believe our work lays out intriguing properties of the deep reinforcement learning policy manifold and our results can help build robust and generalizable deep reinforcement learning policies.
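The kind of black-box robustness probe the abstract describes, measuring how much a trained policy's return degrades when its observations are shifted by perturbations found without gradient access, can be illustrated with a short sketch. The snippet below is a minimal illustration, not the paper's method: it assumes gymnasium's CartPole-v1 environment, a placeholder random policy standing in for a trained vanilla or adversarially trained agent, and i.i.d. Gaussian observation noise as a stand-in for the "natural" perturbation directions discussed in the abstract.

```python
# Minimal sketch (assumptions noted above): black-box evaluation of a deep RL
# policy under natural observation perturbations of increasing magnitude.
import numpy as np
import gymnasium as gym


def episode_return(env, policy, perturb=None, seed=0):
    """Run one episode, perturbing each observation before the policy sees it."""
    obs, _ = env.reset(seed=seed)
    total, done = 0.0, False
    while not done:
        if perturb is not None:
            obs = perturb(obs)                      # policy observes the perturbed state
        action = policy(obs)
        obs, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    return total


def gaussian_noise(std, rng):
    """A simple black-box 'natural' perturbation: i.i.d. Gaussian noise on the observation."""
    return lambda obs: obs + rng.normal(0.0, std, size=np.shape(obs))


def robustness_curve(env, policy, stds, episodes=10):
    """Average return as a function of perturbation magnitude."""
    rng = np.random.default_rng(0)
    curve = []
    for std in stds:
        rets = [episode_return(env, policy, gaussian_noise(std, rng), seed=s)
                for s in range(episodes)]
        curve.append((std, float(np.mean(rets))))
    return curve


if __name__ == "__main__":
    env = gym.make("CartPole-v1")
    # Placeholder policy (random actions); swap in a trained vanilla or
    # adversarially trained agent to compare their degradation curves.
    policy = lambda obs: env.action_space.sample()
    for std, avg in robustness_curve(env, policy, stds=[0.0, 0.1, 0.5, 1.0]):
        print(f"noise std={std:.1f}  avg return={avg:.1f}")
```

Comparing the degradation curves produced this way for a vanilla-trained agent and an adversarially trained one is the kind of black-box robustness comparison the abstract refers to.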
Pages: 8369 - 8377
Page count: 9
Related Papers
50 items in total
  • [1] Certified Adversarial Robustness for Deep Reinforcement Learning
    Lutjens, Bjorn
    Everett, Michael
    How, Jonathan P.
    CONFERENCE ON ROBOT LEARNING, VOL 100, 2019, 100
  • [2] Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
    Everett, Michael
    Lutjens, Bjorn
    How, Jonathan P.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (09) : 4184 - 4198
  • [3] Robust Deep Reinforcement Learning through Adversarial Loss
    Oikarinen, Tuomas
    Zhang, Wang
    Megretski, Alexandre
    Daniel, Luca
    Weng, Tsui-Wei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [4] Adversarial Robustness of Deep Reinforcement Learning Based Dynamic Recommender Systems
    Wang, Siyu
    Cao, Yuanjiang
    Chen, Xiaocong
    Yao, Lina
    Wang, Xianzhi
    Sheng, Quan Z.
    FRONTIERS IN BIG DATA, 2022, 5
  • [5] Adversarial robustness of deep reinforcement learning-based intrusion detection
    Merzouk, Mohamed Amine
    Neal, Christopher
    Delas, Josephine
    Yaich, Reda
    Boulahia-Cuppens, Nora
    Cuppens, Frederic
    INTERNATIONAL JOURNAL OF INFORMATION SECURITY, 2024, 23 (06) : 3625 - 3651
  • [6] Robust Deep Reinforcement Learning with Adversarial Attacks (Extended Abstract)
    Pattanaik, Anay
    Tang, Zhenyi
    Liu, Shuijing
    Bommannan, Gautham
    Chowdhary, Girish
    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS (AAMAS' 18), 2018, : 2040 - 2042
  • [7] Robust Adversarial Reinforcement Learning
    Pinto, Lerrel
    Davidson, James
    Sukthankar, Rahul
    Gupta, Abhinav
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [8] A study of natural robustness of deep reinforcement learning algorithms towards adversarial perturbations
    Liu, Qisai
    Lee, Xian Yeow
    Sarkar, Soumik
    AI OPEN, 2024, 5 : 126 - 141
  • [9] Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations
    Zhang, Huan
    Chen, Hongge
    Xiao, Chaowei
    Li, Bo
    Liu, Mingyan
    Boning, Duane
    Hsieh, Cho-Jui
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [10] Integrating safety constraints into adversarial training for robust deep reinforcement learning
    Meng, Jinling
    Zhu, Fei
    Ge, Yangyang
    Zhao, Peiyao
    INFORMATION SCIENCES, 2023, 619 : 310 - 323