Understanding Generalization in Neural Networks for Robustness against Adversarial Vulnerabilities

Cited: 0
Authors: Chaudhury, Subhajit [1]
Affiliation: [1] Univ Tokyo, Tokyo, Japan
Keywords:
DOI: Not available
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Neural networks have contributed to tremendous progress in computer vision, speech processing, and other real-world applications. However, recent studies have shown that these state-of-the-art models can be easily compromised by adding small, imperceptible perturbations to their inputs. My thesis summary frames the problem of adversarial robustness as an equivalent problem of learning suitable features that lead to good generalization in neural networks. This framing is motivated by learning in humans, who are not trivially fooled by such perturbations because they learn robust features that generalize well out of sample.
Pages: 13714-13715
Page count: 2
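The abstract refers to small, imperceptible perturbations that compromise state-of-the-art models, i.e., adversarial examples. As a concrete illustration only (not the method proposed in this thesis), below is a minimal sketch of the Fast Gradient Sign Method (FGSM) in PyTorch; the toy model, the epsilon budget, and the input shapes are illustrative assumptions.

# Minimal FGSM sketch (illustrative; not the thesis's method). Assumes a PyTorch
# classifier whose inputs lie in [0, 1]; the model, epsilon, and shapes are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    # Compute the loss gradient w.r.t. the input and take one signed step of size
    # epsilon: the classic "small imperceptible perturbation".
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy, untrained classifier on random 28x28 "images", purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)
    y = torch.randint(0, 10, (4,))
    x_adv = fgsm_perturb(model, x, y)
    print("max per-pixel change:", (x_adv - x).abs().max().item())

Adversarial training, one common defense discussed in several of the related papers below, would simply reuse such perturbed inputs as extra training data with the original labels.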
Related Papers (50 in total; items [41]-[50] shown)
  • [41] Adversarial Weight Perturbation Improves Generalization in Graph Neural Networks
    Wu, Yihan
    Bojchevski, Aleksandar
    Huang, Heng
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023: 10417-10425
  • [42] AdvQuNN: A Methodology for Analyzing the Adversarial Robustness of Quanvolutional Neural Networks
    El Maouaki, Walid
    Marchisio, Alberto
    Said, Taoufik
    Bennai, Mohamed
    Shafique, Muhammad
    2024 IEEE INTERNATIONAL CONFERENCE ON QUANTUM SOFTWARE, IEEE QSW 2024, 2024: 175-181
  • [43] Training Neural Networks with Random Noise Images for Adversarial Robustness
    Park, Ji-Young
    Liu, Lin
    Li, Jiuyong
    Liu, Jixue
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021: 3358-3362
  • [44] Robustness of Spiking Neural Networks Based on Time-to-First-Spike Encoding Against Adversarial Attacks
    Nomura, Osamu
    Sakemi, Yusuke
    Hosomi, Takeo
    Morie, Takashi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2022, 69 (09) : 3640 - 3644
  • [45] Increasing-Margin Adversarial (IMA) training to improve adversarial robustness of neural networks
    Ma, Linhai
    Liang, Liang
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2023, 240
  • [46] Generalization of Convolutional Neural Networks for ECG Classification Using Generative Adversarial Networks
    Shaker, Abdelrahman M.
    Tantawi, Manal
    Shedeed, Howida A.
    Tolba, Mohamed F.
    IEEE ACCESS, 2020, 8 : 35592 - 35605
  • [47] On the Relationship between Generalization and Robustness to Adversarial Examples
    Pedraza, Anibal
    Deniz, Oscar
    Bueno, Gloria
    SYMMETRY-BASEL, 2021, 13 (05):
  • [48] Adversarial self-training for robustness and generalization
    Li, Zhuorong
    Wu, Minghui
    Jin, Canghong
    Yu, Daiwei
    Yu, Hongchuan
    PATTERN RECOGNITION LETTERS, 2024, 185 : 117 - 123
  • [49] Robustness and Generalization via Generative Adversarial Training
    Poursaeed, Omid
    Jiang, Tianxing
    Yang, Harry
    Belongie, Serge
    Lim, Ser-Nam
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021: 15691-15700
  • [50] On the Robustness of Neural-Enhanced Video Streaming against Adversarial Attacks
    Zhou, Qihua
    Guo, Jingcai
    Guo, Song
    Li, Ruibin
    Zhang, Jie
    Wang, Bingjie
    Xu, Zhenda
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024: 17123-17131