Hardening against adversarial examples with the smooth gradient method

Cited by: 0
Authors
Alan Mosca
George D. Magoulas
Affiliations
[1] Department of Computer Science and Information Systems, Birkbeck, University of London
Source
Soft Computing | 2018 / Volume 22
Keywords
Adversarial Examples; Fast Gradient Sign Method; Residual Gradient; Input Perturbation; Convolutional Neural Network
DOI
Not available
Abstract
Commonly used methods in deep learning do not utilise transformations of the residual gradient available at the inputs to update the representation in the dataset. It has been shown that this residual gradient, which can be interpreted as the first-order gradient of the input sensitivity at a particular point, may be used to improve generalisation in feed-forward neural networks, including fully connected and convolutional layers. We explore how these input gradients are related to input perturbations used to generate adversarial examples and how the networks that are trained with this technique are more robust to attacks generated with the fast gradient sign method.
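The attack the abstract refers to, the fast gradient sign method (FGSM), perturbs an input by a small step in the direction of the sign of the loss gradient with respect to that input. A minimal sketch on a toy logistic-regression model follows; the weights, input, and epsilon are illustrative values, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    """Fast gradient sign method for a logistic model p = sigmoid(w.x + b).

    Moves each input coordinate by +/- epsilon in the direction that
    increases the binary cross-entropy loss.
    """
    p = sigmoid(w @ x + b)
    # Gradient of the cross-entropy loss w.r.t. the input: (p - y) * w
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Toy model and input (illustrative values)
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.5, 0.1, -0.3])
y = 1.0  # true label

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.1)
print(np.max(np.abs(x_adv - x)))  # → 0.1 (each coordinate moves by exactly epsilon)
```

Because the perturbation follows the sign of the gradient, the model's confidence in the true class drops (here, `sigmoid(w @ x_adv + b)` is lower than for the clean input) even though the input changes by at most epsilon per coordinate. The paper's defence trains networks to be less sensitive to exactly these input-gradient directions.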
Pages: 3203-3213
Page count: 10
Related papers (50 in total)
  • [1] Hardening against adversarial examples with the smooth gradient method
    Mosca, Alan; Magoulas, George D.
    SOFT COMPUTING, 2018, 22 (10): 3203-3213
  • [2] Smooth adversarial examples
    Zhang, Hanwei; Avrithis, Yannis; Furon, Teddy; Amsaleg, Laurent
    EURASIP JOURNAL ON INFORMATION SECURITY, 2020, 2020 (01)
  • [3] Gradient Aggregation Boosting Adversarial Examples Transferability Method
    Deng, Shiyun; Ling, Jie
    COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (14): 275-282
  • [4] Fast Gradient Scaled Method for Generating Adversarial Examples
    Xu, Zhefeng; Luo, Zhijian; Mu, Jinlong
    6TH INTERNATIONAL CONFERENCE ON INNOVATION IN ARTIFICIAL INTELLIGENCE, ICIAI 2022, 2022: 189-193
  • [5] DCAL: A New Method for Defending Against Adversarial Examples
    Lin, Xiaoyu; Cao, Chunjie; Wang, Longjuan; Liu, Zhiyuan; Li, Mengqian; Ma, Haiying
    ARTIFICIAL INTELLIGENCE AND SECURITY, ICAIS 2022, PT II, 2022, 13339: 38-50
  • [6] Crafting Transferable Adversarial Examples Against Face Recognition via Gradient Eroding
    Zhou, H.; Wang, Y.; Tan, Y.-A.; Wu, S.; Zhao, Y.; Zhang, Q.; Li, Y.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (01): 412-419
  • [7] Generate adversarial examples by adaptive moment iterative fast gradient sign method
    Zhang, Jiebao; Qian, Wenhua; Nie, Rencan; Cao, Jinde; Xu, Dan
    APPLIED INTELLIGENCE, 2023, 53 (01): 1101-1114
  • [8] Adversarial Minimax Training for Robustness Against Adversarial Examples
    Komiyama, Ryota; Hattori, Motonobu
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT II, 2018, 11302: 690-699