ADAPTIVE WARPING NETWORK FOR TRANSFERABLE ADVERSARIAL ATTACKS

Cited by: 1
Authors
Son, Minji [1 ]
Kwon, Myung-Joon [1 ]
Kim, Hee-Seon [1 ]
Byun, Junyoung [1 ]
Cho, Seungju [1 ]
Kim, Changick [1 ]
Affiliations
[1] Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering, Daejeon, South Korea
Keywords
Adversarial Attacks; Transfer-based Attacks; Transferability; Input Transformation; Warping;
DOI
10.1109/ICIP46576.2022.9897701
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep Neural Networks (DNNs) are extremely susceptible to adversarial examples, which are crafted by intentionally adding imperceptible perturbations to clean images. Due to the potential threat of adversarial attacks in practice, black-box transfer-based attacks have been carefully studied to identify the vulnerabilities of DNNs. Unfortunately, transfer-based attacks often fail to achieve high transferability because the adversarial examples tend to overfit the source model. Applying input transformations is one of the most effective ways to avoid such overfitting. However, most previous input transformation methods achieve limited transferability because they apply fixed transformations to all images. To solve this problem, we propose an Adaptive Warping Network (AWN), which searches for an appropriate warping for each individual input. Specifically, at each iteration, AWN optimizes a warping that mitigates the effect of the adversarial perturbation, and the adversarial example is then updated to remain robust against such strong transformations. Extensive experiments on the ImageNet dataset demonstrate that AWN outperforms existing input transformation methods in terms of transferability.
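The abstract describes an iterative min-max style procedure: an inner step fits a warping that weakens the current perturbation, and an outer step updates the perturbation so it stays effective even after that warping. The PyTorch sketch below illustrates this idea under my own assumptions; the function and parameter names (warp, awn_attack, warp_steps, warp_lr, and the flow-field parameterisation of the warp) are illustrative and are not taken from the paper.

```python
# Illustrative sketch of an AWN-style adaptive-warping attack loop (PyTorch).
# All names and hyperparameters here are assumptions made for illustration,
# not the authors' implementation.
import torch
import torch.nn.functional as F

def make_base_grid(n, h, w, device):
    """Identity sampling grid in [-1, 1] as expected by grid_sample."""
    ys = torch.linspace(-1, 1, h, device=device)
    xs = torch.linspace(-1, 1, w, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack((gx, gy), dim=-1).unsqueeze(0).expand(n, h, w, 2)

def warp(x, flow):
    """Warp the image batch x with a learnable flow field (offsets to the grid)."""
    n, _, h, w = x.shape
    grid = make_base_grid(n, h, w, x.device) + flow
    return F.grid_sample(x, grid, align_corners=True, padding_mode="border")

def awn_attack(model, x, y, eps=8/255, alpha=2/255, steps=10,
               warp_steps=5, warp_lr=0.01):
    """Hypothetical AWN-style attack: per iteration, first optimise a warp that
    mitigates the current perturbation, then update the perturbation so the
    adversarial example still fools the model after that warp."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        # Inner loop: descend on the attack loss w.r.t. the flow field, so the
        # warp tries to restore the correct prediction (undo the perturbation).
        flow = torch.zeros(x.size(0), x.size(2), x.size(3), 2,
                           device=x.device, requires_grad=True)
        for _ in range(warp_steps):
            loss = F.cross_entropy(model(warp(x_adv, flow)), y)
            grad, = torch.autograd.grad(loss, flow)
            flow = (flow - warp_lr * grad.sign()).detach().requires_grad_(True)
        # Outer step: ascend on the attack loss w.r.t. the image, evaluated
        # through the fitted warp, so the example becomes robust to it.
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(warp(x_adv, flow.detach())), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```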
Pages: 3056-3060 (5 pages)