Transcend Adversarial Examples: Diversified Adversarial Attacks to Test Deep Learning Model

Cited: 0
Authors
Kong, Wei [1 ]
Affiliation
[1] Natl Key Lab Sci & Technol Informat Syst Secur, Beijing, Peoples R China
Keywords
Adversarial Attack; Diversity; Robustness and Security; Test Deep Learning Model;
DOI
10.1109/ICCD58817.2023.00013
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Existing optimization-based adversarial attacks search for perturbations within an lp-norm ball that maximize a highly non-convex loss function. Random initialization of the perturbation and the steepest-gradient-direction strategy are efficient techniques for avoiding local optima, but they compromise the ability to explore diverse solutions. We therefore introduce the Diversity-Driven Adversarial Attack (DAA), which incorporates the Output Diversity Strategy (ODS) and diverse initialization gradient directions into an optimization-based adversarial attack algorithm, aiming to refine the inherent properties of the resulting adversarial examples (AEs). More specifically, building on ODS, we design a diversity-promoting regularizer that penalizes small distances between initialization gradient directions. Extensive experiments demonstrate that DAA efficiently improves existing coverage criteria without sacrificing attack success rate, which implies that DAA implicitly explores more of the internal logic of the DL model under test.
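The combination described in the abstract, an ODS-style diversified initialization followed by projected-gradient ascent inside an lp ball, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the linear "model" `logits`, the margin-loss gradient, and all hyperparameters below are illustrative assumptions standing in for a real deep network and its autograd machinery.

```python
import numpy as np

# Toy linear "model": logits(x) = W @ x. A hypothetical stand-in for a DNN.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 32))           # 10 classes, 32 input features

def logits(x):
    return W @ x

def ods_init_direction():
    """Output Diversity Strategy: sample a random direction w in output
    space and move along the input-space gradient of w^T logits(x).
    For this linear model that gradient is simply W^T w."""
    w = rng.uniform(-1.0, 1.0, size=10)
    g = W.T @ w                          # d/dx of w^T (W x)
    return g / (np.linalg.norm(g) + 1e-12)

def pgd_linf(x, y, eps=0.1, step=0.02, iters=20):
    """PGD ascent under an l_inf ball, started from an ODS-diversified point."""
    x_adv = x + eps * ods_init_direction()        # diversified initialization
    for _ in range(iters):
        z = logits(x_adv)
        # gradient of a margin loss: best wrong-class logit minus true logit
        j = int(np.argmax(np.delete(z, y)))
        j = j if j < y else j + 1                 # map back to original index
        grad = W[j] - W[y]                        # d/dx of (z_j - z_y)
        x_adv = x_adv + step * np.sign(grad)      # steepest l_inf ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the ball
    return x_adv

x = rng.normal(size=32)
y = int(np.argmax(logits(x)))
x_adv = pgd_linf(x, y)
print(np.max(np.abs(x_adv - x)) <= 0.1 + 1e-9)   # perturbation stays in the ball
```

Running the attack repeatedly yields different `x_adv` per restart because each restart draws a fresh output-space direction; DAA's regularizer additionally pushes those initial directions apart, which this sketch does not implement.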
Pages: 13 - 20
Page count: 8
Related Papers
50 records total
  • [21] Experiments on Adversarial Examples for Deep Learning Model Using Multimodal Sensors
    Kurniawan, Ade
    Ohsita, Yuichi
    Murata, Masayuki
    SENSORS, 2022, 22 (22)
  • [22] Adversarial attacks and adversarial training for burn image segmentation based on deep learning
    Chen, Luying
    Liang, Jiakai
    Wang, Chao
    Yue, Keqiang
    Li, Wenjun
    Fu, Zhihui
    MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2024, 62 (09) : 2717 - 2735
  • [23] ADVERSARIAL EXAMPLES FOR GOOD: ADVERSARIAL EXAMPLES GUIDED IMBALANCED LEARNING
    Zhang, Jie
    Zhang, Lei
    Li, Gang
    Wu, Chao
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 136 - 140
  • [24] Explaining Deep Learning Models with Constrained Adversarial Examples
    Moore, Jonathan
    Hammerla, Nils
    Watkins, Chris
    PRICAI 2019: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2019, 11670 : 43 - 56
  • [25] Detecting Operational Adversarial Examples for Reliable Deep Learning
    Zhao, Xingyu
    Huang, Wei
    Schewe, Sven
    Dong, Yi
    Huang, Xiaowei
    51ST ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS - SUPPLEMENTAL VOL (DSN 2021), 2021, : 5 - 6
  • [26] Analyzing the Robustness of Deep Learning Against Adversarial Examples
    Zhao, Jun
    2018 56TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2018, : 1060 - 1064
  • [27] Adversarial Examples Detection for XSS Attacks Based on Generative Adversarial Networks
    Zhang, Xueqin
    Zhou, Yue
    Pei, Songwen
    Zhuge, Jingjing
    Chen, Jiahao
    IEEE ACCESS, 2020, 8 (08): : 10989 - 10996
  • [28] Understanding adversarial attacks on observations in deep reinforcement learning
    You, Qiaoben
    Ying, Chengyang
    Zhou, Xinning
    Su, Hang
    Zhu, Jun
    Zhang, Bo
    SCIENCE CHINA-INFORMATION SCIENCES, 2024, 67 (05)
  • [29] Threat of Adversarial Attacks within Deep Learning: Survey
    Ata-Us-samad
    Singh R.
    Recent Advances in Computer Science and Communications, 2023, 16 (07)
  • [30] A Survey on Adversarial Attacks and Defenses for Deep Reinforcement Learning
    Liu A.-S.
    Guo J.
    Li S.-M.
    Xiao Y.-S.
    Liu X.-L.
    Tao D.-C.
    Jisuanji Xuebao/Chinese Journal of Computers, 2023, 46 (08): : 1553 - 1576