Low-epsilon adversarial attack against a neural network online image stream classifier

Cited by: 8
Authors
Arjomandi, Hossein Mohasel [1 ]
Khalooei, Mohammad [1 ]
Amirmazlaghani, Maryam [1 ]
Affiliations
[1] Amirkabir Univ Technol, Comp Engn Dept, Tehran, Iran
Keywords
Adversarial attack; Image classification; Image stream; Optimization; Regularization;
DOI
10.1016/j.asoc.2023.110760
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
An adversary intercepts a stream of images between a sender and a receiver neural network classifier. To minimize its footprint, the adversary attacks only a limited number of images within the stream, and it aims to maximize the number of successful attacks among all attacks it performs. Upon the arrival of each image, and before the arrival of the next, the adversary must irrevocably decide whether or not to attack the current image. The target model is a fixed deep neural network that may use any form of regularization. The adversary has query access to the target model: it can feed in images and obtain the loss, which may contain both regularization and classification loss terms. Since the proposed method requires the classification loss term alone, the paper also introduces a novel method by which the adversary estimates the regularization loss term and eliminates it. All images are partitioned into three groups based on their after-attack classification loss and are treated according to their group. The paper also reports promising test results on various datasets. (c) 2023 Elsevier B.V. All rights reserved.
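The abstract's loss-decomposition idea can be illustrated with a toy sketch. This is not the paper's actual estimation method; it only demonstrates the underlying observation under one simplifying assumption, namely that the regularization term (e.g. an L2 weight penalty) does not depend on the input, so it cancels whenever the adversary compares total-loss queries. All names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 4))        # toy linear classifier weights
REG = 0.01 * np.sum(W ** 2)         # input-independent L2 penalty (a constant)

def classification_loss(x, y):
    """Cross-entropy of a toy linear softmax classifier."""
    logits = x @ W
    logits -= logits.max()          # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[y])

def queried_loss(x, y):
    """What the adversary actually observes through query access:
    classification loss plus the regularization term."""
    return classification_loss(x, y) + REG

x1, x2 = rng.normal(size=10), rng.normal(size=10)

# The constant regularization term cancels in differences, so the adversary
# can rank candidate images by after-attack classification loss using
# total-loss queries alone: L(x1) - L(x2) = CE(x1) - CE(x2).
diff_total = queried_loss(x1, 0) - queried_loss(x2, 0)
diff_cls = classification_loss(x1, 0) - classification_loss(x2, 0)
assert abs(diff_total - diff_cls) < 1e-9
```

For input-dependent regularizers this cancellation does not hold, which is presumably why the paper needs an explicit estimation step to strip the regularization term from the queried loss.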
Pages: 13