Componentwise Adversarial Attacks

Cited by: 1
Authors
Beerens, Lucas
Higham, Desmond J. [1 ]
Affiliation
[1] Univ Edinburgh, Sch Math, Edinburgh EH8 9BT, Scotland
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT I | 2023 / Vol. 14254
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
backward error; misclassification; stability;
DOI
10.1007/978-3-031-44207-0_45
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We motivate and test a new adversarial attack algorithm that measures input perturbation size in a relative componentwise manner. The algorithm can be implemented by solving a sequence of linearly constrained linear least-squares problems, for which high-quality software is available. In the image classification context, as a special case, the algorithm may be applied to artificial neural networks that classify printed or handwritten text: we show that it is possible to generate hard-to-spot perturbations that cause misclassification by perturbing only the "ink" and hence leaving the background intact. Such examples are relevant to application areas in defence, business, law and finance.
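The key idea can be illustrated in a much simpler setting than the paper treats. For a linear binary classifier sign(w·x + b), the smallest relative componentwise perturbation (each |δᵢ| ≤ ε|xᵢ|) that pushes a positively classified input onto the decision boundary has a closed form, and, because zero components can never move, "background" pixels are automatically left untouched. The sketch below is an illustrative assumption of this degenerate case, not the paper's algorithm, which handles nonlinear networks via a sequence of linearly constrained least-squares problems:

```python
import numpy as np

def componentwise_attack_linear(w, b, x):
    """Minimal relative componentwise perturbation delta with
    |delta_i| <= eps * |x_i| that moves x onto the boundary of the
    linear classifier sign(w @ x + b). Components with x_i = 0
    (the 'background') are never perturbed."""
    margin = w @ x + b              # assumed positive (x is classified +1)
    denom = np.abs(w * x).sum()     # max score decrease per unit eps
    if denom == 0.0:
        raise ValueError("no nonzero component can move the score")
    eps = margin / denom            # minimal relative perturbation size
    delta = -eps * np.abs(x) * np.sign(w)
    return delta, eps

# Hypothetical toy example: note the zero ('background') component is untouched.
w = np.array([1.0, -2.0, 0.5])
b = -0.1
x = np.array([1.0, 0.0, 2.0])
delta, eps = componentwise_attack_linear(w, b, x)
# x + delta lands exactly on the decision boundary: w @ (x + delta) + b == 0
```

In practice one would perturb slightly beyond the boundary to force a label flip; the paper's method replaces this closed form with constrained least-squares solves applied to a local linearization of the network.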
Pages: 542-545
Page count: 4
Related Papers
(50 total)
  • [21] Adversarial Attacks for Object Detection
    Xu, Bo
    Zhu, Jinlin
    Wang, Danwei
    PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE, 2020, : 7281 - 7287
  • [22] Sparse and Imperceivable Adversarial Attacks
    Croce, Francesco
    Hein, Matthias
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 4723 - 4731
  • [23] Exploring Transferability on Adversarial Attacks
    Alvarez, Enrique
    Alvarez, Rafael
    Cazorla, Miguel
    IEEE ACCESS, 2023, 11 : 105545 - 105556
  • [24] HIDDEN CONDITIONAL ADVERSARIAL ATTACKS
    Byun, Junyoung
    Shim, Kyujin
    Go, Hyojun
    Kim, Changick
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1306 - 1310
  • [25] Learning to Ignore Adversarial Attacks
    Zhang, Yiming
    Zhou, Yangqiaoyu
    Carton, Samuel
    Tan, Chenhao
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 2970 - 2984
  • [26] Adversarial Attacks on Time Series
    Karim, Fazle
    Majumdar, Somshubra
    Darabi, Houshang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (10) : 3309 - 3320
  • [27] Boosting Adversarial Attacks with Momentum
    Dong, Yinpeng
    Liao, Fangzhou
    Pang, Tianyu
    Su, Hang
    Zhu, Jun
    Hu, Xiaolin
    Li, Jianguo
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 9185 - 9193
  • [28] WASSERTRAIN: AN ADVERSARIAL TRAINING FRAMEWORK AGAINST WASSERSTEIN ADVERSARIAL ATTACKS
    Zhao, Qingye
    Chen, Xin
    Zhao, Zhuoyu
    Tang, Enyi
    Li, Xuandong
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2734 - 2738
  • [29] Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions
    Vadillo, Jon
    Santana, Roberto
    Lozano, Jose A.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2023, 24