Transcend Adversarial Examples: Diversified Adversarial Attacks to Test Deep Learning Model

Cited by: 0
Authors
Kong, Wei [1 ]
Affiliation
[1] Natl Key Lab Sci & Technol Informat Syst Secur, Beijing, Peoples R China
Keywords
Adversarial Attack; Diversity; Robustness and Security; Test Deep Learning Model;
DOI
10.1109/ICCD58817.2023.00013
CLC Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Existing optimization-based adversarial attacks search for a perturbation within an l_p-norm ball while maximizing a highly non-convex loss function. Random perturbation initialization and steepest-gradient-direction strategies are efficient at avoiding local optima, but they compromise the attack's capacity for diversity exploration. We therefore introduce the Diversity-Driven Adversarial Attack (DAA), which incorporates the Output Diversity Strategy (ODS) and diversified initialization gradient directions into an optimized adversarial attack algorithm, aiming to refine the inherent properties of the generated adversarial examples (AEs). More specifically, building on ODS, we design a diversity-promoting regularizer that penalizes small distances between initialization gradient directions. Extensive experiments demonstrate that DAA efficiently improves existing coverage criteria without sacrificing attack success rate, implying that DAA implicitly explores more of the internal logic of the deep learning model.
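The ODS initialization the abstract builds on can be illustrated with a minimal sketch. The idea (from Tashiro et al.'s Output Diversified Sampling, which the paper's ODS refers to) is to draw random weights over the model's output logits and follow the normalized input gradient of that weighted sum, so that different restarts start in genuinely different directions. The linear "model" below is a hypothetical stand-in for a real network, chosen only so the gradient has a closed form; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "model": logits f(x) = W @ x (illustrative stand-in
# for a deep network with 10 classes and 784 input features).
W = rng.normal(size=(10, 784))

def ods_direction(W, rng):
    """Sample one ODS-style initialization direction:
    draw output-space weights w_d ~ U(-1, 1)^C, then return the
    normalized input gradient of w_d^T f(x)."""
    w_d = rng.uniform(-1.0, 1.0, size=W.shape[0])
    # For the linear toy model, grad_x of w_d^T (W @ x) is simply W^T @ w_d.
    g = W.T @ w_d
    return g / np.linalg.norm(g)

# Two restarts yield two distinct unit-norm starting directions,
# which is the diversity the DAA regularizer seeks to encourage.
d1 = ods_direction(W, rng)
d2 = ods_direction(W, rng)
print(np.isclose(np.linalg.norm(d1), 1.0), np.allclose(d1, d2))
```

A PGD-style attack would scale such a direction by the step size to initialize the perturbation; the paper's regularizer additionally penalizes pairs of initialization directions that are too close, but its exact form is not given in this abstract.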
Pages: 13 - 20
Page count: 8