Understanding Generalization in Neural Networks for Robustness against Adversarial Vulnerabilities

Cited by: 0
Author
Chaudhury, Subhajit [1 ]
Affiliation
[1] Univ Tokyo, Tokyo, Japan
Keywords: (none listed)
DOI: not available
CLC classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Neural networks have driven tremendous progress in computer vision, speech processing, and other real-world applications. However, recent studies have shown that these state-of-the-art models can be easily compromised by adding small, imperceptible perturbations to their inputs. My thesis summary frames the problem of adversarial robustness as an equivalent problem of learning suitable features that lead to good generalization in neural networks. This framing is motivated by learning in humans, which is not trivially fooled by such perturbations because robust feature learning yields good out-of-sample generalization.
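To make the notion of "small, imperceptible perturbations" concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), a standard way to craft such perturbations. It is not the method of the thesis; the toy logistic-regression weights and input below are illustrative assumptions, chosen so the input gradient has a simple closed form.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps in the sign of the input gradient of the loss.

    For logistic regression with cross-entropy loss, the gradient of
    the loss with respect to the input is (sigmoid(w.x + b) - y) * w.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and input (assumed for illustration).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])  # clean input, true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.9)
print(sigmoid(w @ x + b) > 0.5)      # True: clean input classified as class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # False: perturbed input flips to class 0
```

In high-dimensional image space a much smaller eps suffices, which is why such perturbations can flip a deep model's prediction while remaining imperceptible to humans.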
Pages: 13714-13715 (2 pages)
Related papers (50 total)
  • [31] Robustness of Neural Ensembles Against Targeted and Random Adversarial Learning
    Wang, Shir Li
    Shafi, Kamran
    Lokan, Chris
    Abbass, Hussein A.
    [J]. 2010 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE 2010), 2010,
  • [32] Understanding Attention and Generalization in Graph Neural Networks
    Knyazev, Boris
    Taylor, Graham W.
    Amer, Mohamed R.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [33] Adversarial Robustness of Multi-bit Convolutional Neural Networks
    Frickenstein, Lukas
    Sampath, Shambhavi Balamuthu
    Mori, Pierpaolo
    Vemparala, Manoj-Rohit
    Fasfous, Nael
    Frickenstein, Alexander
    Unger, Christian
    Passerone, Claudio
    Stechele, Walter
    [J]. INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 3, INTELLISYS 2023, 2024, 824 : 157 - 174
  • [34] A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks
    Wang, Yang
    Dong, Bo
    Xu, Ke
    Piao, Haiyin
    Ding, Yufei
    Yin, Baocai
    Yang, Xin
    [J]. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19 (05)
  • [35] Towards Improving Robustness of Deep Neural Networks to Adversarial Perturbations
    Amini, Sajjad
    Ghaemmaghami, Shahrokh
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (07) : 1889 - 1903
  • [36] Adversarial Robustness of Vision Transformers Versus Convolutional Neural Networks
    Ali, Kazim
    Bhatti, Muhammad Shahid
    Saeed, Atif
    Athar, Atifa
    Al Ghamdi, Mohammed A.
    Almotiri, Sultan H.
    Akram, Samina
    [J]. IEEE ACCESS, 2024, 12 : 105281 - 105293
  • [37] SeVuc: A study on the Security Vulnerabilities of Capsule Networks against adversarial attacks
    Marchisio, Alberto
    Nanfa, Giorgio
    Khalid, Faiq
    Hanif, Muhammad Abdullah
    Martina, Maurizio
    Shafique, Muhammad
    [J]. MICROPROCESSORS AND MICROSYSTEMS, 2023, 96
  • [38] Do Wider Neural Networks Really Help Adversarial Robustness?
    Wu, Boxi
    Chen, Jinghui
    Cai, Deng
    He, Xiaofei
    Gu, Quanquan
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [39] Adversarial Weight Perturbation Improves Generalization in Graph Neural Networks
    Wu, Yihan
    Bojchevski, Aleksandar
    Huang, Heng
    [J]. THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023, : 10417 - 10425
  • [40] Training Neural Networks with Random Noise Images for Adversarial Robustness
    Park, Ji-Young
    Liu, Lin
    Li, Jiuyong
    Liu, Jixue
    [J]. PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 3358 - 3362