Understanding Generalization in Neural Networks for Robustness against Adversarial Vulnerabilities

Cited by: 0
Author: Chaudhury, Subhajit [1]
Affiliation: [1] Univ Tokyo, Tokyo, Japan
Keywords: (none listed)
DOI: (not available)
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline classification codes: 081104; 0812; 0835; 1405
Abstract
Neural networks have driven tremendous progress in computer vision, speech processing, and other real-world applications. However, recent studies have shown that these state-of-the-art models can be easily compromised by small, imperceptible input perturbations. My thesis summary frames adversarial robustness as the equivalent problem of learning suitable features that lead to good generalization in neural networks. This framing is motivated by human perception, which is not trivially fooled by such perturbations: humans learn robust features that generalize well out of sample.
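To make the threat model concrete, the sketch below illustrates one standard way such "small imperceptible perturbations" are crafted: the Fast Gradient Sign Method (FGSM), which nudges each input coordinate by a bounded amount in the direction that increases the loss. This is a minimal, hypothetical example on a toy logistic-regression classifier, not the models or attacks studied in the thesis.

```python
import numpy as np

# Hypothetical "trained" linear classifier: P(class=1) = sigmoid(w @ x + b).
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # fixed illustrative weights
b = 0.0
x = rng.normal(size=16)   # a clean input

def predict_proba(x):
    z = w @ x + b
    return 1.0 / (1.0 + np.exp(-z))

# Attack the model's current prediction. For the logistic loss with label y,
# the input gradient is dL/dx = (p - y) * w.
y = 1.0 if predict_proba(x) >= 0.5 else 0.0
grad_x = (predict_proba(x) - y) * w

# FGSM step: eps bounds the L-infinity size of the perturbation, so each
# pixel/feature changes by at most eps -- "imperceptible" for small eps.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print("clean P(1) =", predict_proba(x))
print("adv   P(1) =", predict_proba(x_adv))
```

Because the perturbation follows the loss gradient, the adversarial probability always moves away from the attacked label, even though no coordinate of the input changes by more than `eps`.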
Pages: 13714-13715 (2 pages)