An Universal Adversarial Attack Method Based on Spherical Projection

Times Cited: 1
Authors
Fan, Chunlong [1]
Zhang, Zhimin [1]
Qiao, Jianzhong [1]
Affiliations
[1] Northeastern Univ, Sch Comp Sci & Engn, Shenyang, Liaoning, Peoples R China
Keywords
Adversarial perturbation; neural network; gradient rise; adversarial attack; spherical projection;
DOI
10.1142/S0218126622500384
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Adversarial attacks on neural networks have become an important problem that restricts their use in security-sensitive applications, and among attacks aimed at an entire sample set, designing a universal perturbation that causes most samples to be misclassified is a key research question. Taking neural networks for image classification as the research object, this paper surveys existing universal perturbation generation algorithms and proposes a new algorithm that combines batch stochastic gradient ascent with a spherical projection search: the perturbation is trained iteratively by stochastic gradient ascent over batches of samples to drive up the model's loss, and the search is confined to a high-dimensional sphere of radius epsilon, which reduces the search space of the universal perturbation. In addition, a regularization technique is introduced to improve the quality of the generated universal perturbations. Experimental results show that, compared with the baseline algorithm, the attack success rate increases by more than 10%, the universal perturbation is computed an order of magnitude faster, and the quality of the perturbation is more controllable.
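For readers who want a concrete picture of the recipe described above, the following is a minimal PyTorch sketch (not the authors' code) of universal perturbation generation by batch stochastic gradient ascent with projection onto a sphere of radius epsilon. The model, data loader, step size, radius, epoch count, and the simple L2 penalty used as the regularizer are illustrative assumptions; the paper's exact update rule and regularization scheme are not reproduced here.

    # Minimal sketch, assuming a PyTorch image classifier and a standard DataLoader.
    # It illustrates the general idea only: ascend the classification loss over
    # batches of samples and re-project the shared perturbation onto the sphere
    # ||delta||_2 = eps after every update.
    import torch
    import torch.nn.functional as F

    def project_to_sphere(delta: torch.Tensor, eps: float) -> torch.Tensor:
        """Rescale delta so that its L2 norm equals eps (spherical projection)."""
        norm = delta.flatten().norm(p=2).clamp_min(1e-12)
        return delta * (eps / norm)

    def universal_perturbation(model, loader, eps=10.0, lr=0.05, reg=1e-3,
                               epochs=5, device="cpu"):
        """Search for one perturbation that raises the loss on most samples."""
        model.eval()
        x0, _ = next(iter(loader))
        # One perturbation shared by every sample, e.g. shape (1, 3, 32, 32).
        delta = torch.zeros_like(x0[:1]).to(device).requires_grad_(True)
        for _ in range(epochs):
            for x, y in loader:                   # batch stochastic updates
                x, y = x.to(device), y.to(device)
                logits = model(x + delta)         # delta broadcasts over the batch
                # Gradient-ascent objective: classification loss minus a simple
                # L2 penalty (an assumed stand-in for the paper's regularization).
                objective = F.cross_entropy(logits, y) - reg * delta.pow(2).sum()
                grad, = torch.autograd.grad(objective, delta)
                with torch.no_grad():
                    # Normalized-gradient ascent step (one common choice).
                    delta += lr * grad / grad.flatten().norm(p=2).clamp_min(1e-12)
                    # Keep the search confined to the eps-sphere.
                    delta.copy_(project_to_sphere(delta, eps))
        return delta.detach()

In practice one would also track the fooling rate of delta on a held-out set after each epoch and stop once it plateaus; that bookkeeping is omitted here for brevity.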
Pages: 19
Related Papers
50 records in total
  • [1] A General Adversarial Attack Method Based on Random Gradient Ascent and Spherical Projection
    Fan, Chun-Long
    Li, Yan-Da
    Xia, Xiu-Feng
    Qiao, Jian-Zhong
    Dongbei Daxue Xuebao/Journal of Northeastern University, 2022, 43(02): 168-175
  • [2] A Survey on Universal Adversarial Attack
    Zhang, Chaoning
    Benz, Philipp
    Lin, Chenguo
    Karjauv, Adil
    Wu, Jing
    Kweon, In So
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021: 4687-4694
  • [3] Universal Adversarial Attack on Deep Learning Based Prognostics
    Basak, Arghya
    Rathore, Pradeep
    Nistala, Sri Harsha
    Srinivas, Sagar
    Runkana, Venkataramana
    20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021: 23-29
  • [4] TransNoise: Transferable Universal Adversarial Noise for Adversarial Attack
    Wei, Yier
    Gao, Haichang
    Wang, Yufei
    Liu, Huan
    Gao, Yipeng
    Luo, Sainan
    Guo, Qianwen
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT V, 2023, 14258: 193-205
  • [5] A variable adversarial attack method based on filtering
    Li, Jiachun
    Hu, Yuchao
    Xia, Fei
    COMPUTERS & SECURITY, 2023, 134
  • [6] Sample Based Fast Adversarial Attack Method
    Wang, Zhi-Ming
    Gu, Meng-Ting
    Hou, Jia-Hui
    NEURAL PROCESSING LETTERS, 2019, 50(03): 2731-2744
  • [7] Adversarial attack method based on loss smoothing
    Li, Meihong
    Jin, Shuang
    Du, Ye
    Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics, 2024, 50(02): 663-670
  • [8] Projection-Based Physical Adversarial Attack for Monocular Depth Estimation
    Daimo, Renya
    Ono, Satoshi
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2023, E106D(01): 31-35
  • [9] A method for filtering the attack pairs of adversarial examples based on attack distance
    Liu, Hongyi
    Fang, Yutong
    Wen, Weiping
    Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics, 2022, 48(02): 339-347