Reinforcement Learning-based Adversarial Attacks on Object Detectors using Reward Shaping

Cited: 1
Authors
Shi, Zhenbo [1 ]
Yang, Wei [2 ]
Xu, Zhenbo [3 ]
Yu, Zhidong [1 ]
Huang, Liusheng [1 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Univ Sci & Technol China, Hefei Natl Lab, Hefei, Peoples R China
[3] Beihang Univ, Hangzhou Innovat Inst, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Object Detection; Reinforcement Learning; Adversarial Attack;
DOI
10.1145/3581783.3612304
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the field of object detector attacks, previous methods primarily rely on fixed gradient optimization or patch-based cover techniques, often leading to suboptimal attack performance and excessive distortions. To address these limitations, we propose a novel attack method, Interactive Reinforcement-based Sparse Attack (IRSA), which employs Reinforcement Learning (RL) to discover the vulnerabilities of object detectors and systematically generate erroneous results. Specifically, we formulate the search for optimal adversarial-example margins as a Markov Decision Process (MDP). We tackle the RL convergence difficulty through innovative reward functions and a composite optimization method for effective and efficient policy training. Moreover, the perturbations generated by IRSA are more subtle and harder to detect while requiring less computational effort. Our method also demonstrates strong generalization across various object detectors. In summary, IRSA is a refined, efficient, and scalable interactive, end-to-end attack algorithm.
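The abstract's MDP formulation can be illustrated with a toy sketch: the state is the current perturbed input, an action is a single small coordinate change (keeping the perturbation sparse), and a shaped reward trades the detector's confidence drop against a perturbation-magnitude penalty. Everything below is a hypothetical illustration under assumed names: `detector_confidence` is a stand-in scoring function (not a real detector), the greedy one-step policy stands in for a trained RL policy, and the penalty weight `lam` is an assumed hyperparameter, none of which come from the paper itself.

```python
def detector_confidence(x):
    """Stand-in for an object detector's objectness score (hypothetical)."""
    return max(0.0, min(1.0, 0.5 + 0.1 * sum(x)))

def shaped_reward(conf_before, conf_after, perturb_norm, lam=0.05):
    """Shaped reward: confidence drop (attack progress) minus a penalty on
    perturbation size, rewarding subtle yet effective changes."""
    return (conf_before - conf_after) - lam * perturb_norm

def greedy_episode(x, steps=20, eps=0.05):
    """One MDP episode. Each step perturbs the single coordinate whose change
    yields the highest shaped reward -- a greedy stand-in for an RL policy."""
    x = list(x)
    total_reward = 0.0
    for _ in range(steps):
        conf = detector_confidence(x)
        best = None
        for i in range(len(x)):          # enumerate sparse one-coordinate actions
            for delta in (-eps, eps):
                y = list(x)
                y[i] += delta
                r = shaped_reward(conf, detector_confidence(y), abs(delta))
                if best is None or r > best[0]:
                    best = (r, y)
        total_reward += best[0]
        x = best[1]                      # state transition
    return x, total_reward, detector_confidence(x)

x_adv, total_r, final_conf = greedy_episode([1.0] * 5)
```

Here the shaping term is what steers the search toward sparse, low-distortion perturbations: without the `lam * perturb_norm` penalty, any confidence drop would be rewarded regardless of how visible the change is.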
Pages: 8424-8432 (9 pages)
Related Papers
50 records in total
  • [41] Adversarial Training Against Adversarial Attacks for Machine Learning-Based Intrusion Detection Systems
    Haroon, Muhammad Shahzad
    Ali, Husnain Mansoor
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 73(02): 3513-3527
  • [42] An Interactive Adversarial Reward Learning-based Spoken Language Understanding System
    Wang, Yu
    Shen, Yilin
    Jin, Hongxia
    INTERSPEECH 2020, 2020: 1565-1569
  • [43] Improving Generalization in Reinforcement Learning-Based Trading by Using a Generative Adversarial Market Model
    Kuo, Chia-Hsuan
    Chen, Chiao-Ting
    Lin, Sin-Jing
    Huang, Szu-Hao
    IEEE ACCESS, 2021, 9: 50738-50754
  • [44] Reward poisoning attacks in deep reinforcement learning based on exploration strategies
    Cai, Kanting
    Zhu, Xiangbin
    Hu, Zhaolong
    NEUROCOMPUTING, 2023, 553
  • [45] Adaptive Safety Shields for Reinforcement Learning-Based Cell Shaping
    Dey, Sumanta
    Mujumdar, Anusha
    Dasgupta, Pallab
    Dey, Soumyajit
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2022, 19(04): 5034-5043
  • [46] Reinforcement Learning-based Response Shaping Control of Dynamical Systems
    Shivani, Chepuri
    Kandath, Harikumar
    2023 11TH INTERNATIONAL CONFERENCE ON CONTROL, MECHATRONICS AND AUTOMATION, ICCMA, 2023: 403-408
  • [47] Defending Convolutional Neural Network-Based Object Detectors Against Adversarial Attacks
    Cheng, Jeffrey
    Hu, Victor
    2020 9TH IEEE INTEGRATED STEM EDUCATION CONFERENCE (ISEC 2020), 2020
  • [48] An Adversarial Reinforcement Learning Framework for Robust Machine Learning-based Malware Detection
    Ebrahimi, Mohammadreza
    Li, Weifeng
    Chai, Yidong
    Pacheco, Jason
    Chen, Hsinchun
    2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW, 2022: 567-576
  • [49] AIR: Threats of Adversarial Attacks on Deep Learning-Based Information Recovery
    Chen, Jinyin
    Ge, Jie
    Zheng, Shilian
    Ye, Linhui
    Zheng, Haibin
    Shen, Weiguo
    Yue, Keqiang
    Yang, Xiaoniu
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23(09): 10698-10711
  • [50] Adversarial Attacks on Deep Learning-Based DOA Estimation With Covariance Input
    Yang, Zhuang
    Zheng, Shilian
    Zhang, Luxin
    Zhao, Zhijin
    Yang, Xiaoniu
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30: 1377-1381