Low-epsilon adversarial attack against a neural network online image stream classifier

Cited by: 8
Authors
Arjomandi, Hossein Mohasel [1]
Khalooei, Mohammad [1]
Amirmazlaghani, Maryam [1]
Affiliations
[1] Amirkabir Univ Technol, Comp Engn Dept, Tehran, Iran
Keywords
Adversarial attack; Image classification; Image stream; Optimization; Regularization
DOI
10.1016/j.asoc.2023.110760
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
An adversary intercepts a stream of images between a sender and a receiver neural network classifier. To minimize its footprint, the adversary attacks only a limited number of images within the stream and aims to maximize the proportion of successful attacks among those it performs. Upon the arrival of each image, and before the next image arrives, the adversary must irrevocably decide whether to attack the current image. The target model is a fixed deep neural network that may use any form of regularization. The adversary has query access to the target model: it can feed images and obtain the loss, which may contain both regularization and classification terms. Because the proposed method requires the classification loss term alone, the paper also introduces a novel procedure by which the adversary estimates the regularization loss term and eliminates it. All images are partitioned into three groups based on their after-attack classification loss and treated according to their group. The paper also reports promising experimental results on several datasets. (c) 2023 Elsevier B.V. All rights reserved.
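The abstract describes three query-only ingredients: removing an input-independent regularization term from the returned loss, attacking under a limited budget, and routing each image into one of three groups by its after-attack classification loss. The sketch below illustrates that flow under stated assumptions only; the toy softmax target, the random-sign search attack, the minimum-loss regularization estimator, the attack budget, and the thresholds TAU_LOW/TAU_HIGH are hypothetical stand-ins, not the paper's algorithm.

```python
# Minimal sketch of the online attack-selection idea summarized above.
# All modeling choices here are illustrative assumptions, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)

# Toy target: a fixed softmax classifier whose queryable loss adds an
# input-independent L2 weight-decay term (standing in for "any regularization").
D, C, LAMBDA = 20, 3, 0.05
W = rng.normal(size=(D, C))

def query_loss(x, y):
    """Black-box query: total loss = cross-entropy + weight decay."""
    logits = x @ W
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    cross_entropy = -np.log(probs[y] + 1e-12)
    regularization = LAMBDA * np.sum(W ** 2)  # does not depend on x
    return cross_entropy + regularization

def estimate_reg_offset(warmup):
    """One crude estimator (an assumption): take the smallest total loss over
    warm-up queries; it approaches the pure regularization term whenever some
    images are classified with near-zero classification loss."""
    return min(query_loss(x, y) for x, y in warmup)

def attack(x, y, reg_hat, eps=0.3, steps=10):
    """Crude query-based attack (random sign perturbations in an eps ball);
    returns the perturbed image and its estimated after-attack classification loss."""
    best_x, best_cls = x, query_loss(x, y) - reg_hat
    for _ in range(steps):
        candidate = x + eps * rng.choice([-1.0, 1.0], size=x.shape)
        cls_loss = query_loss(candidate, y) - reg_hat  # regularization eliminated
        if cls_loss > best_cls:
            best_x, best_cls = candidate, cls_loss
    return best_x, best_cls

# Two thresholds split images into three groups by after-attack classification loss.
TAU_LOW, TAU_HIGH = 0.5, 2.0

def handle_stream(stream, reg_hat, budget=5):
    """Irrevocable per-image decision: submit the attacked image or forward the clean one."""
    used, decisions = 0, []
    for x, y in stream:
        x_adv, after_cls = attack(x, y, reg_hat)
        if after_cls >= TAU_HIGH and used < budget:
            decisions.append("attack")   # likely-success group: spend budget
            used += 1
        elif after_cls <= TAU_LOW:
            decisions.append("skip")     # attack unlikely to flip the label
        else:                            # borderline group: attack only while
            if used < budget // 2:       # most of the budget is still unspent
                decisions.append("attack")
                used += 1
            else:
                decisions.append("skip")
    return decisions

if __name__ == "__main__":
    data = [(rng.normal(size=D), int(rng.integers(C))) for _ in range(80)]
    reg_hat = estimate_reg_offset(data[:20])
    print(handle_stream(data[20:], reg_hat, budget=5))
```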
Pages: 13
Related papers
50 in total
  • [1] Design of robust hyperspectral image classifier based on adversarial training against adversarial attack
    Park I.
    Kim S.
    Journal of Institute of Control, Robotics and Systems, 2021, 27 (06) : 389 - 400
  • [2] Towards an Efficient and Robust Adversarial Attack Against Neural Text Classifier
    Yi, Zibo
    Li, Shasha
    Ma, Jun
    Yu, Jie
    Tan, Yusong
    Wu, Qingbo
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36 (11)
  • [3] FPGA Adaptive Neural Network Quantization for Adversarial Image Attack Defense
    Lu, Yufeng
    Shi, Xiaokang
    Jiang, Jianan
    Deng, Hanhui
    Wang, Yanwen
    Lu, Jiwu
    Wu, Di
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (12) : 14017 - 14028
  • [4] Adversarial Attack Against Convolutional Neural Network via Gradient Approximation
    Wang, Zehao
    Li, Xiaoran
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT VI, ICIC 2024, 2024, 14867 : 221 - 232
  • [5] Conditional Generative Adversarial Network-Based Image Denoising for Defending Against Adversarial Attack
    Zhang, Haibo
    Sakurai, Kouichi
    IEEE ACCESS, 2021, 9 : 169031 - 169043
  • [6] Adversarial Attack on GNN-based SAR Image Classifier
    Ye, Tian
    Kannan, Rajgopal
    Prasanna, Viktor
    Busart, Carl
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V, 2023, 12538
  • [7] A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples
    Liu, Guanxiong
    Khalil, Issa
    Khreishah, Abdallah
    Phan, NhatHai
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 834 - 846
  • [8] On the Transferability of Adversarial Attacks against Neural Text Classifier
    Yuan, Liping
    Zheng, Xiaoqing
    Zhou, Yi
    Hsieh, Cho-Jui
    Chang, Kai-Wei
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 1612 - 1625
  • [9] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
    Kwon, Hyun
    Lee, Jun
    SYMMETRY-BASEL, 2021, 13 (03):
  • [10] PPNNI: Privacy-Preserving Neural Network Inference Against Adversarial Example Attack
    He, Guanghui
    Ren, Yanli
    He, Gang
    Feng, Guorui
    Zhang, Xinpeng
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (06) : 4083 - 4096