Boosting Adversarial Transferability with Shallow-Feature Attack on SAR Images

Cited by: 8
Authors
Lin, Gengyou [1 ]
Pan, Zhisong [1 ]
Zhou, Xingyu [2 ]
Duan, Yexin [3 ]
Bai, Wei [1 ]
Zhan, Dazhi [1 ]
Zhu, Leqian [1 ]
Zhao, Gaoqiang [1 ]
Li, Tao [1 ]
Affiliations
[1] Army Engn Univ PLA, Command & Control Engn Coll, Nanjing 210007, Peoples R China
[2] Army Engn Univ PLA, Commun Engn Coll, Nanjing 210007, Peoples R China
[3] Army Mil Transportat Univ PLA, Zhenjiang Campus, Zhenjiang 212000, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
adversarial attack; deep neural network; black-box attack; targeted attack; feature-level attack; ATTENTION; EXAMPLES;
DOI
10.3390/rs15102699
Chinese Library Classification
X [Environmental Science, Safety Science];
Subject Classification Codes
08; 0830;
Abstract
Adversarial example generation on Synthetic Aperture Radar (SAR) images is an important research area with significant implications for security and environmental monitoring. However, most existing adversarial attacks on SAR images are designed for white-box settings and operate end-to-end, assumptions that are often unrealistic in practice. This article proposes a novel black-box targeted attack method called Shallow-Feature Attack (SFA). SFA builds on the observation that a model's shallow features better capture spatial and semantic information in the image, such as target contours and textures. The proposed SFA generates ghost data packages for the input images and produces critical features by extracting gradients and feature maps at shallow layers of the model. A feature-level loss is then constructed from the critical features of both clean and target images and combined with the end-to-end loss to form a hybrid loss function. By fitting the critical features of the input image at specific shallow layers of the neural network to the target's critical features, SFA generates more powerful and transferable adversarial examples. Experimental results show that adversarial examples generated by SFA improve the black-box attack success rate by an average of 3.73% for single-model attacks, and by 4.61% when combined with ensemble-model attacks that exclude the victim models.
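The hybrid loss described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the gradient-averaged channel weighting, the function names, and the balancing coefficient `lam` are assumptions for illustration; SFA's exact formulation (including the ghost data packages) is defined in the paper itself.

```python
import numpy as np

def critical_features(feature_map, grad):
    """Weight each channel of a shallow-layer feature map by its
    spatially averaged gradient (a Grad-CAM-style aggregation,
    assumed here as one plausible way to form 'critical features').
    feature_map, grad: arrays of shape (C, H, W)."""
    weights = grad.mean(axis=(1, 2), keepdims=True)  # (C, 1, 1)
    return weights * feature_map

def hybrid_loss(adv_feat, adv_grad, tgt_feat, tgt_grad,
                logits, target_class, lam=1.0):
    """Feature-level MSE pulling the adversarial image's critical
    features toward the target image's, plus lam times an
    end-to-end cross-entropy on the target class."""
    f_adv = critical_features(adv_feat, adv_grad)
    f_tgt = critical_features(tgt_feat, tgt_grad)
    feat_loss = np.mean((f_adv - f_tgt) ** 2)
    # Softmax cross-entropy toward the target label (end-to-end term).
    z = logits - logits.max()  # shift for numerical stability
    ce = -(z[target_class] - np.log(np.exp(z).sum()))
    return feat_loss + lam * ce
```

In a full attack loop, the gradient of this scalar with respect to the input image (obtained via the network's autodiff, not shown here) would drive iterative perturbation updates, as in standard feature-level attacks.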
Pages: 23