Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints

Cited by: 0
Authors
Li, Xin [1 ]
Li, Xiangrui [1 ]
Pan, Deng [1 ]
Zhu, Dongxiao [1 ]
Affiliations
[1] Wayne State Univ, Dept Comp Sci, Detroit, MI 48202 USA
Funding
U.S. National Science Foundation;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Convolutional neural networks (CNNs) have achieved state-of-the-art performance on various computer vision tasks. However, recent studies demonstrate that these models are vulnerable to carefully crafted adversarial samples and suffer a significant performance drop when predicting them. Many methods have been proposed to improve adversarial robustness (e.g., adversarial training and new loss functions that learn adversarially robust feature representations). Here we offer a unique insight into the predictive behavior of CNNs: they tend to misclassify adversarial samples into the most probable false classes. This inspires us to propose a new Probabilistically Compact (PC) loss with logit constraints, which can be used as a drop-in replacement for the cross-entropy (CE) loss to improve CNNs' adversarial robustness. Specifically, the PC loss enlarges the probability gaps between the true class and the false classes, while the logit constraints prevent these gaps from being erased by small perturbations. We extensively compare our method with the state of the art on large-scale datasets under both white-box and black-box attacks to demonstrate its effectiveness. The source code is available at https://github.com/xinli0928/PC-LC.
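To make the abstract's mechanism concrete, the sketch below shows one way a PC-style loss with a logit penalty could look in PyTorch. It is a minimal illustration, not the authors' implementation (see the GitHub repository above for that): the hinge margin `margin`, the weight `lam`, and the use of an L2 penalty on the logits as the "logit constraint" are all assumptions made here for clarity.

```python
import torch
import torch.nn.functional as F


def pc_loss_with_logit_constraint(logits, targets, margin=0.1, lam=0.01):
    """Illustrative PC-style loss: a hinge that keeps every false-class
    probability at least `margin` below the true-class probability, plus an
    assumed L2 penalty on the logits standing in for the logit constraints
    (discouraging large logits so small perturbations cannot erase the gaps).
    """
    num_classes = logits.size(1)
    probs = F.softmax(logits, dim=1)                    # (N, K) class probabilities
    true_prob = probs.gather(1, targets.unsqueeze(1))   # (N, 1) true-class probability

    # Mask out the true class so only false classes enter the hinge term.
    false_mask = torch.ones_like(probs).scatter_(1, targets.unsqueeze(1), 0.0)
    hinge = torch.clamp(probs + margin - true_prob, min=0.0) * false_mask
    pc_term = hinge.sum(dim=1).mean() / (num_classes - 1)  # average over K-1 false classes

    # Assumed logit constraint: penalize the squared norm of the logit vector.
    logit_term = logits.pow(2).sum(dim=1).mean()

    return pc_term + lam * logit_term
```

Used this way, `loss = pc_loss_with_logit_constraint(model(x), y)` would replace `F.cross_entropy(model(x), y)` in an otherwise unchanged training loop, matching the abstract's claim that the PC loss is a drop-in replacement for the CE loss.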
Pages: 8482 - 8490
Number of pages: 9
Related Papers
50 records in total
  • [1] Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
    Li, Xingjian
    Goodman, Dou
    Liu, Ji
    Wei, Tao
    Dou, Dejing
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 4
  • [2] Improving Adversarial Robustness via Finding Flat Minimum of the Weight Loss Landscape
    Yan, Jiale
    Xu, Yang
    Zhang, Sicong
    Li, Kezi
    Xie, Xiaoyao
    Journal of Computers (Taiwan), 2023, 34 (01) : 29 - 43
  • [3] Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss
    Li, Pengcheng
    Yi, Jinfeng
    Zhou, Bowen
    Zhang, Lijun
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 2909 - 2915
  • [4] Towards Adversarial Robustness via Compact Feature Representations
    Shah, Muhammad A.
    Olivier, Raphael
    Raj, Bhiksha
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3845 - 3849
  • [5] Improving Adversarial Robustness via Guided Complement Entropy
    Chen, Hao-Yun
    Liang, Jhao-Hong
    Chang, Shih-Chieh
    Pan, Jia-Yu
    Chen, Yu-Ting
    Wei, Wei
    Juan, Da-Cheng
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 4880 - 4888
  • [6] Improving Adversarial Robustness of Detector via Objectness Regularization
    Bao, Jiayu
    Chen, Jiansheng
    Ma, Hongbing
    Ma, Huimin
    Yu, Cheng
    Huang, Yiqing
    PATTERN RECOGNITION AND COMPUTER VISION, PT IV, 2021, 13022 : 252 - 262
  • [7] Improving Adversarial Robustness via Information Bottleneck Distillation
    Kuang, Huafeng
    Liu, Hong
    Wu, YongJian
    Satoh, Shin'ichi
    Ji, Rongrong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [8] Improving Adversarial Robustness via Promoting Ensemble Diversity
    Pang, Tianyu
    Xu, Kun
    Du, Chao
    Chen, Ning
    Zhu, Jun
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [9] Improving Adversarial Robustness of CNNs via Maximum Margin
    Wu, Jiaping
    Xia, Zhaoqiang
    Feng, Xiaoyi
    APPLIED SCIENCES-BASEL, 2022, 12 (15):
  • [10] Improving Adversarial Robustness via Mutual Information Estimation
    Zhou, Dawei
    Wang, Nannan
    Gao, Xinbo
    Han, Bo
    Wang, Xiaoyu
    Zhan, Yibing
    Liu, Tongliang
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,