Hardening against adversarial examples with the smooth gradient method

Cited by: 0
Authors
Alan Mosca
George D. Magoulas
Affiliations
[1] Department of Computer Science and Information Systems, Birkbeck, University of London
Source
Soft Computing | 2018 / Volume 22
Keywords
Adversarial Examples; Fast Gradient Sign Method; Residual Gradient; Input Perturbation; Convolutional Neural Network;
DOI
Not available
Abstract
Commonly used methods in deep learning do not utilise transformations of the residual gradient available at the inputs to update the representation in the dataset. It has been shown that this residual gradient, which can be interpreted as the first-order gradient of the input sensitivity at a particular point, can be used to improve generalisation in feed-forward neural networks, including those with fully connected and convolutional layers. We explore how these input gradients relate to the input perturbations used to generate adversarial examples, and how networks trained with this technique become more robust to attacks generated with the fast gradient sign method.
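The fast gradient sign method (FGSM) named in the keywords, due to Goodfellow et al., perturbs each input in the direction of the sign of the loss gradient with respect to that input. A minimal PyTorch sketch of the attack follows; the model, inputs x, labels y, and step size epsilon are illustrative placeholders, not values from the paper:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.1):
        # x_adv = x + epsilon * sign(grad_x loss): a single step in the
        # direction that most increases the loss, with L-infinity budget epsilon.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

The complementary idea of feeding the input gradient back into training can be illustrated with a generic input-gradient penalty in the spirit of double backpropagation (Drucker and LeCun). This is a sketch of that general idea under the same placeholder assumptions, not the paper's smooth gradient method:

    def input_gradient_penalty(model, x, y, lam=0.01):
        # Penalise the squared norm of the loss gradient w.r.t. the inputs;
        # create_graph=True keeps the graph so the penalty itself is
        # differentiable with respect to the network weights.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
        return loss + lam * grad_x.pow(2).mean()

Flattening the loss surface with respect to the inputs in this way means a small sign-following perturbation changes the loss less, which is one intuition for why input-gradient training can harden networks against FGSM attacks.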
Pages: 3203-3213
Number of pages: 10
Related papers
50 in total
  • [21] On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks
    Dyrmishi, Salijona
    Ghamizi, Salah
    Simonetto, Thibault
    Le Traon, Yves
    Cordy, Maxime
    2023 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2023: 1384-1400
  • [22] Method for Improving Quality of Adversarial Examples
    Duc-Anh Nguyen
    Kha Do Minh
    Duc-Anh Pham
    Pham Ngoc Hung
    ICAART: PROCEEDINGS OF THE 14TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 2, 2022: 214-225
  • [23] On the Effect of Adversarial Training Against Invariance-based Adversarial Examples
    Rauter, Roland
    Nocker, Martin
    Merkle, Florian
    Schoettle, Pascal
    PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING TECHNOLOGIES (ICMLT 2023), 2023: 54-60
  • [24] INOR-An Intelligent noise reduction method to defend against adversarial audio examples
    Guo, Qingli
    Ye, Jing
    Chen, Yiran
    Hu, Yu
    Lan, Yazhu
    Zhang, Guohe
    Li, Xiaowei
    NEUROCOMPUTING, 2020, 401: 160-172
  • [25] Multi-scale Gradient Adversarial Examples Generation Network
    Shi L.
    Zhang X.
    Hong X.
    Li J.
    Ding W.
    Shen C.
    Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2022, 35(06): 483-496
  • [26] AdvCheck: Characterizing adversarial examples via local gradient checking
    Chen, Ruoxi
    Jin, Haibo
    Chen, Jinyin
    Zheng, Haibin
    Zheng, Shilian
    Yang, Xiaoniu
    Yang, Xing
    COMPUTERS & SECURITY, 2024, 136
  • [27] Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses
    Hashemi, Mohammad
    Cusack, Greg
    Keller, Eric
    AISEC'18: PROCEEDINGS OF THE 11TH ACM WORKSHOP ON ARTIFICIAL INTELLIGENCE AND SECURITY, 2018: 25-36
  • [28] Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope
    Wong, Eric
    Kolter, J. Zico
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018
  • [29] Advocating for Multiple Defense Strategies Against Adversarial Examples
    Araujo, Alexandre
    Meunier, Laurent
    Pinot, Rafael
    Negrevergne, Benjamin
    ECML PKDD 2020 WORKSHOPS, 2020, 1323: 165-177
  • [30] On the Defense Against Adversarial Examples Beyond the Visible Spectrum
    Ortiz, Anthony
    Fuentes, Olac
    Rosario, Dalton
    Kiekintveld, Christopher
    2018 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2018), 2018: 553-558