Componentwise Adversarial Attacks

Cited: 1
Authors
Beerens, Lucas
Higham, Desmond J. [1]
Affiliations
[1] Univ Edinburgh, Sch Math, Edinburgh EH8 9BT, Scotland
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT I | 2023 / Vol. 14254
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
backward error; misclassification; stability
DOI
10.1007/978-3-031-44207-0_45
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We motivate and test a new adversarial attack algorithm that measures input perturbation size in a relative componentwise manner. The algorithm can be implemented by solving a sequence of linearly-constrained linear least-squares problems, for which high-quality software is available. In the image classification context, as a special case the algorithm may be applied to artificial neural networks that classify printed or handwritten text; we show that it is possible to generate hard-to-spot perturbations that cause misclassification by perturbing only the "ink" and hence leaving the background intact. Such examples are relevant to application areas in defence, business, law and finance.
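The abstract's key idea can be illustrated with a small sketch: bounding each perturbation component relative to the corresponding input component means that zero-valued "background" pixels stay exactly zero, and each sub-problem is a bound-constrained linear least-squares solve. The toy linear classifier, the target-class choice, and the growing tolerance schedule below are illustrative assumptions, not the paper's exact algorithm; SciPy's `lsq_linear` stands in for the "high-quality software" mentioned.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)

# Toy linear classifier (illustrative assumption): scores = W @ x, class = argmax.
n, k = 20, 3
W = rng.normal(size=(k, n))
x = rng.normal(size=n)
x[np.abs(x) < 0.3] = 0.0          # zero components play the role of "background"

c = int(np.argmax(W @ x))         # currently predicted class
t = (c + 1) % k                   # arbitrary target class for the attack
w = W[t] - W[c]                   # misclassification needs w @ (x + delta) > 0

# Componentwise-relative parametrisation: delta = eps * z * x with |z_i| <= 1,
# so |delta_i| <= eps * |x_i| and zero components of x are never perturbed.
delta = np.zeros(n)
for eps in (0.01, 0.05, 0.1, 0.2, 0.5):   # grow the tolerance until the class flips
    A = (eps * w * x).reshape(1, -1)      # 1 x n least-squares design matrix
    b = np.atleast_1d(0.01 - w @ x)       # aim for a small positive margin
    res = lsq_linear(A, b, bounds=(-1.0, 1.0))
    delta = eps * res.x * x
    if np.argmax(W @ (x + delta)) == t:
        break
```

By construction the perturbation is componentwise-relative: every nonzero input entry moves by at most a factor `eps`, and the background is left intact, mirroring the "ink-only" attacks described in the abstract.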
Pages: 542-545 (4 pages)
Related Papers (50 total)
  • [41] Boosting adversarial attacks with transformed gradient
    He, Zhengyun
    Duan, Yexin
    Zhang, Wu
    Zou, Junhua
    He, Zhengfang
    Wang, Yunyun
    Pan, Zhisong
    COMPUTERS & SECURITY, 2022, 118
  • [42] Physical Adversarial Attacks by Projecting Perturbations
    Worzyk, Nils
    Kahlen, Hendrik
    Kramer, Oliver
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: IMAGE PROCESSING, PT III, 2019, 11729 : 649 - 659
  • [43] ADVERSARIAL ATTACKS ON GENERATED TEXT DETECTORS
    Su, Pengcheng
    Tu, Rongxin
    Liu, Hongmei
    Qing, Yue
    Kang, Xiangui
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 2849 - 2854
  • [44] Escaping Adversarial Attacks with Egyptian Mirrors
    Saukh, Olga
    PROCEEDINGS OF THE 2023 THE 2ND ACM WORKSHOP ON DATA PRIVACY AND FEDERATED LEARNING TECHNOLOGIES FOR MOBILE EDGE NETWORK, FEDEDGE 2023, 2023, : 131 - 136
  • [45] CONTEXTUAL ADVERSARIAL ATTACKS FOR OBJECT DETECTION
    Zhang, Hantao
    Zhou, Wengang
    Li, Houqiang
    2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2020,
  • [46] Deblurring as a Defense against Adversarial Attacks
    Duckworth, William, III
    Liao, Weixian
    Yu, Wei
    2023 IEEE 12TH INTERNATIONAL CONFERENCE ON CLOUD NETWORKING, CLOUDNET, 2023, : 61 - 67
  • [47] Adversarial Attacks on Speech Separation Systems
    Trinh, Kendrick
    Moh, Melody
    Moh, Teng-Sheng
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 703 - 708
  • [48] Adversarial Attacks and Defenses in Deep Learning
    Ren, Kui
    Zheng, Tianhang
    Qin, Zhan
    Liu, Xue
    ENGINEERING, 2020, 6 (03) : 346 - 360
  • [49] Survey of adversarial attacks on speech recognition
    He Y.
    Hu M.
    Peng Z.
    Deng X.
    Liu S.
    Huazhong Keji Daxue Xuebao (Ziran Kexue Ban)/Journal of Huazhong University of Science and Technology (Natural Science Edition), 2023, 51 (02): : 10 - 18
  • [50] Adversarial Attacks on Linear Contextual Bandits
    Garcelon, Evrard
    Roziere, Baptiste
    Meunier, Laurent
    Tarbouriech, Jean
    Teytaud, Olivier
    Lazaric, Alessandro
    Pirotta, Matteo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33