Bounded Adversarial Attack on Deep Content Features

Cited by: 2
Authors
Xu, Qiuling [1]
Tao, Guanhong [1]
Zhang, Xiangyu [1]
Affiliations
[1] Purdue University, West Lafayette, IN 47907, USA
Funding
US National Science Foundation
DOI
10.1109/CVPR52688.2022.01477
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
We propose a novel adversarial attack targeting content features in a deep layer, that is, the individual neurons of the layer. A naive method that enforces a fixed value/percentage bound on neuron activation values hardly works and generates very noisy samples, because the perceptual variation entailed by a fixed value bound is non-uniform across neurons, and even across inputs for the same neuron. We hence propose a novel distribution quantile bound for activation values and a polynomial barrier loss function. Given a benign input, a fixed quantile bound is translated to many value bounds, one for each neuron, based on the distribution of the neuron's activations and its current activation value on the given input. These individualized bounds enable fine-grained regulation, allowing content feature mutations with bounded perceptual variation. Our evaluation on ImageNet and five model architectures demonstrates that the attack is effective. Compared to seven recent adversarial attacks in both the pixel space and the feature space, our attack achieves a state-of-the-art trade-off between attack success rate and imperceptibility.
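To make the quantile-bound idea from the abstract concrete, below is a minimal PyTorch sketch. This is not the authors' implementation: the function names (`quantile_bounds`, `polynomial_barrier`), the tensor shapes, and the quantile radius `q` are all illustrative assumptions. It shows how a single fixed quantile bound could be translated into per-neuron value bounds from empirical activation distributions, and how a polynomial barrier could penalize activations that drift outside those individualized bounds.

```python
import torch

def quantile_bounds(ref_acts, cur_acts, q=0.1):
    # ref_acts: (N, D) activations of one deep layer over N reference images,
    #           an empirical estimate of each neuron's activation distribution.
    # cur_acts: (D,) activations of the same layer on the benign input.
    # q: quantile radius; each neuron may move +/- q from its current
    #    quantile position within its own distribution.
    pos = (ref_acts <= cur_acts).float().mean(dim=0)   # (D,) empirical CDF position
    lo = (pos - q).clamp(0.0, 1.0)
    hi = (pos + q).clamp(0.0, 1.0)
    sorted_ref, _ = ref_acts.sort(dim=0)               # per-neuron order statistics
    n = ref_acts.shape[0]
    lo_idx = (lo * (n - 1)).long().unsqueeze(0)        # (1, D)
    hi_idx = (hi * (n - 1)).long().unsqueeze(0)
    lower = sorted_ref.gather(0, lo_idx).squeeze(0)    # (D,) per-neuron lower bound
    upper = sorted_ref.gather(0, hi_idx).squeeze(0)    # (D,) per-neuron upper bound
    return lower, upper

def polynomial_barrier(acts, lower, upper, power=4):
    # Polynomial penalty: zero inside each neuron's [lower, upper] interval,
    # growing quickly once an activation leaves it.
    excess = (acts - upper).clamp(min=0.0) + (lower - acts).clamp(min=0.0)
    return (excess ** power).sum()

# Toy usage: 1000 reference activations for a hypothetical 512-neuron layer.
if __name__ == "__main__":
    torch.manual_seed(0)
    ref = torch.randn(1000, 512).relu()
    cur = torch.randn(512).relu()
    lower, upper = quantile_bounds(ref, cur, q=0.1)
    mutated = cur + 0.5 * torch.randn(512)
    print(polynomial_barrier(mutated, lower, upper))
```

In an attack loop, a barrier term of this kind would presumably be added to the adversarial objective, so that the classification loss is maximized while each neuron's mutation stays within a bound tailored to its own activation distribution.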
Pages: 15182-15191
Page count: 10
Related Papers
50 items in total
  • [1] Adversarial Attack and Defense in Deep Ranking. Zhou, Mo; Wang, Le; Niu, Zhenxing; Zhang, Qilin; Zheng, Nanning; Hua, Gang. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46(8): 5306-5324.
  • [2] Content-based Unrestricted Adversarial Attack. Chen, Zhaoyu; Li, Bo; Wu, Shuang; Jiang, Kaixun; Ding, Shouhong; Zhang, Wenqiang. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023.
  • [3] Deep adversarial attack on target detection systems. Osahor, Uche M.; Nasrabadi, Nasser M. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006.
  • [4] DTFA: Adversarial attack with discrete cosine transform noise and target features on deep neural networks. Yang, Dong; Chen, Wei; Wei, Songjie. IET IMAGE PROCESSING, 2023, 17(5): 1464-1477.
  • [5] Adversarial Watermarking to Attack Deep Neural Networks. Wang, Gengxing; Chen, Xinyuan; Xu, Chang. 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019: 1962-1966.
  • [6] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks. Kwon, Hyun; Lee, Jun. SYMMETRY-BASEL, 2021, 13(3).
  • [7] Multiuser Adversarial Attack on Deep Learning for OFDM Detection. Ye, Youjie; Chen, Yunfei; Liu, Mingqian. IEEE WIRELESS COMMUNICATIONS LETTERS, 2022, 11(12): 2527-2531.
  • [8] Deep learning models for electrocardiograms are susceptible to adversarial attack. Han, Xintian; Hu, Yuxuan; Foschini, Luca; Chinitz, Larry; Jankelson, Lior; Ranganath, Rajesh. NATURE MEDICINE, 2020, 26(3): 360-363.
  • [9] Tactics of Adversarial Attack on Deep Reinforcement Learning Agents. Lin, Yen-Chen; Hong, Zhang-Wei; Liao, Yuan-Hong; Shih, Meng-Li; Liu, Ming-Yu; Sun, Min. PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017: 3756-3762.