Understanding Generalization in Neural Networks for Robustness against Adversarial Vulnerabilities

Cited by: 0
Author: Chaudhury, Subhajit [1]
Affiliation: [1] Univ Tokyo, Tokyo, Japan
Keywords: none listed
DOI: not available
CLC Classification: TP18 [Artificial Intelligence Theory]
Subject Classification: 081104; 0812; 0835; 1405
Abstract
Neural networks have driven tremendous progress in computer vision, speech processing, and other real-world applications. However, recent studies have shown that these state-of-the-art models can be easily compromised by adding small, imperceptible perturbations to their inputs. My thesis summary frames adversarial robustness as an equivalent problem of learning suitable features that lead to good generalization in neural networks. This framing is motivated by learning in humans, which is not trivially fooled by such perturbations: humans learn robust features that show good out-of-sample generalization.
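The "small imperceptible perturbations" the abstract refers to can be illustrated with a gradient-sign attack in the style of FGSM. The sketch below is not from the thesis; it uses a toy logistic-regression "network" with made-up weights, input, and epsilon purely to show how a loss-increasing perturbation bounded in the L-infinity norm is constructed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps):
    # Gradient of the loss w.r.t. the *input* x is (p - y) * w
    # for this logistic model; step in its sign direction,
    # bounded per-coordinate by eps.
    grad_x = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -3.0, 1.0])    # toy "trained" weights (illustrative)
x = np.array([0.5, -0.5, 0.25])   # clean input with true label y = 1
y = 1.0

x_adv = fgsm(w, x, y, eps=0.3)
# Each coordinate moves by at most 0.3, yet the loss increases.
print(loss(w, x, y), loss(w, x_adv, y))
```

On real image classifiers the same construction, with epsilon small relative to pixel range, flips predictions while leaving the image visually unchanged, which is the vulnerability the thesis targets.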
Pages: 13714-13715 (2 pages)
Related Papers (50 total)
  • [1] Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations
    Tsai, Yu-Lin
    Hsu, Chia-Yi
    Yu, Chia-Mu
    Chen, Pin-Yu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [2] Relative Robustness of Quantized Neural Networks Against Adversarial Attacks
    Duncan, Kirsty
    Komendantskaya, Ekaterina
    Stewart, Robert
    Lones, Michael
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [3] Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks
    Karantzas, Nikos
    Besier, Emma
    Caro, Josue Ortega
    Pitkow, Xaq
    Tolias, Andreas S.
    Patel, Ankit B.
    Anselmi, Fabio
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 5
  • [4] Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks
    Ayaz, Ferheen
    Zakariyya, Idris
    Cano, Jose
    Keoh, Sye Loong
    Singer, Jeremy
    Pau, Danilo
    Kharbouche-Harrari, Mounia
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [5] Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks
    Ayaz, Ferheen
    Zakariyya, Idris
    Cano, José
    Keoh, Sye Loong
    Singer, Jeremy
    Pau, Danilo
    Kharbouche-Harrari, Mounia
    arXiv, 2023,
  • [6] Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation
    Wang, Binghui
    Jia, Jinyuan
    Cao, Xiaoyu
    Gong, Neil Zhenqiang
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 1645 - 1653
  • [7] MRobust: A Method for Robustness against Adversarial Attacks on Deep Neural Networks
    Liu, Yi-Ling
    Lomuscio, Alessio
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [8] Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity
    Aquino, Bernardo
    Rahnama, Arash
    Seiler, Peter
    Lin, Lizhen
    Gupta, Vijay
    IEEE CONTROL SYSTEMS LETTERS, 2022, 6 : 2341 - 2346
  • [9] Understanding adversarial robustness against on-manifold adversarial examples
    Xiao, Jiancong
    Yang, Liusha
    Fan, Yanbo
    Wang, Jue
    Luo, Zhi-Quan
    PATTERN RECOGNITION, 2025, 159
  • [10] Uncovering Hidden Vulnerabilities in Convolutional Neural Networks through Graph-based Adversarial Robustness Evaluation
    Wang, Ke
    Chen, Zicong
    Dang, Xilin
    Fan, Xuan
    Han, Xuming
    Chen, Chien-Ming
    Ding, Weiping
    Yiu, Siu-Ming
    Weng, Jian
    PATTERN RECOGNITION, 2023, 143