Componentwise Adversarial Attacks

Cited by: 1
Authors
Beerens, Lucas
Higham, Desmond J. [1]
Affiliations
[1] Univ Edinburgh, Sch Math, Edinburgh EH8 9BT, Scotland
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT I | 2023, Vol. 14254
Funding
UK Engineering and Physical Sciences Research Council;
Keywords
backward error; misclassification; stability;
DOI
10.1007/978-3-031-44207-0_45
CLC classification
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
We motivate and test a new adversarial attack algorithm that measures input perturbation size in a relative componentwise manner. The algorithm can be implemented by solving a sequence of linearly constrained linear least-squares problems, for which high-quality software is available. In the image classification context, as a special case the algorithm may be applied to artificial neural networks that classify printed or handwritten text: we show that it is possible to generate hard-to-spot perturbations that cause misclassification by perturbing only the "ink", leaving the background intact. Such examples are relevant to application areas in defence, business, law and finance.
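The computational core described in the abstract, a bound-constrained linear least-squares step in which each component of the perturbation is limited relative to the corresponding input component, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `J` stands in for a local linearization of the classifier, and `target_shift` and `eps` are illustrative names. Note how zero ("background") pixels receive a zero budget, so only the "ink" is perturbed.

```python
# Sketch of one componentwise-relative perturbation step.
# Assumption: the network has been linearized around the input x, giving
# a Jacobian-like matrix J; we seek a perturbation delta that pushes the
# output toward target_shift while satisfying |delta_i| <= eps * |x_i|.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)

x = rng.random(16)                     # flattened "image"
x[x < 0.5] = 0.0                       # zero entries act as background
J = rng.standard_normal((4, 16))       # stand-in for a local linearization
target_shift = rng.standard_normal(4)  # desired change in the output

eps = 0.1                              # relative componentwise budget
ink = x != 0                           # only "ink" pixels may move
bound = eps * np.abs(x[ink])           # |delta_i| <= eps * |x_i| on the ink

# Bound-constrained linear least squares over the ink pixels only:
#   min ||J_ink d - target_shift||  s.t.  -bound <= d <= bound
res = lsq_linear(J[:, ink], target_shift, bounds=(-bound, bound))

delta = np.zeros_like(x)
delta[ink] = res.x                     # background stays exactly zero
```

Restricting the solve to the nonzero components both enforces the "ink only" property exactly and keeps the bounds strictly feasible for the solver.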
Pages: 542-545
Page count: 4
Related papers
(50 records in total)
  • [1] Adversarial ink: componentwise backward error attacks on deep learning
    Beerens, Lucas
    Higham, Desmond J.
    IMA JOURNAL OF APPLIED MATHEMATICS, 2023, 89 (01) : 175 - 196
  • [2] ADVERSARIAL ATTACKS ON ADVERSARIAL BANDITS
    Microsoft Azure AI
    [affiliation unknown]
    arXiv, 1600,
  • [3] Composite Adversarial Attacks
    Mao, Xiaofeng
    Chen, Yuefeng
    Wang, Shuhui
    Su, Hang
    He, Yuan
    Xue, Hui
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 8884 - 8892
  • [4] ON THE REVERSIBILITY OF ADVERSARIAL ATTACKS
    Li, Chau Yi
    Sanchez-Matilla, Ricardo
    Shamsabadi, Ali Shahin
    Mazzon, Riccardo
    Cavallaro, Andrea
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3073 - 3077
  • [5] Functional Adversarial Attacks
    Laidlaw, Cassidy
    Feizi, Soheil
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [7] DETECTION OF ADVERSARIAL ATTACKS AND CHARACTERIZATION OF ADVERSARIAL SUBSPACE
    Esmaeilpour, Mohammad
    Cardinal, Patrick
    Koerich, Alessandro Lameiras
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 3097 - 3101
  • [8] Adversarial attacks and adversarial robustness in computational pathology
    Ghaffari Laleh, Narmin
    Truhn, Daniel
    Veldhuizen, Gregory Patrick
    Han, Tianyu
    van Treeck, Marko
    Buelow, Roman D.
    Langer, Rupert
    Dislich, Bastian
    Boor, Peter
    Schulz, Volkmar
    Kather, Jakob Nikolas
    NATURE COMMUNICATIONS, 2022, 13 (01)
  • [9] Point Cloud Adversarial Perturbation Generation for Adversarial Attacks
    He, Fengmei
    Chen, Yihuai
    Chen, Ruidong
    Nie, Weizhi
    IEEE ACCESS, 2023, 11 : 2767 - 2774
  • [10] Text Adversarial Purification as Defense against Adversarial Attacks
    Li, Linyang
    Song, Demin
    Qiu, Xipeng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 338 - 350