Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness

Cited by: 0
Authors
Anh Tuan Bui [1]
Trung Le [1]
He Zhao [1]
Paul Montague [2]
Olivier deVel [2]
Tamas Abraham [2]
Dinh Phung [1]
Affiliations
[1] Monash University, Clayton, VIC, Australia
[2] Defence Science and Technology Group, Canberra, ACT, Australia
DOI: not available
CLC Classification: TP18 (Theory of Artificial Intelligence)
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Ensemble-based adversarial training is a principled approach to achieving robustness against adversarial attacks. An important technique in this approach is to control the transferability of adversarial examples among ensemble members. In this work, we propose a simple yet effective strategy for collaboration among the committee models of an ensemble. This is achieved via secure and insecure sets defined for each member model on a given sample, which allow us to quantify and regularize transferability. Consequently, our proposed framework provides the flexibility both to reduce adversarial transferability and to promote the diversity of ensemble members, two crucial factors for better robustness in our ensemble approach. We conduct extensive and comprehensive experiments demonstrating that our proposed method outperforms state-of-the-art ensemble baselines while also detecting a wide range of adversarial examples with nearly perfect accuracy. Our code is available at: https://github.com/tuananhbui89/Crossing-Collaborative-Ensemble.
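The secure/insecure-set idea can be made concrete with a short sketch. The PyTorch-style code below is a minimal illustration assuming a list of member classifiers `members`, a one-step FGSM attack, and an illustrative cross-member penalty; it is one plausible reading of the abstract, not the authors' exact Crossing-Collaborative-Ensemble formulation (see the linked repository for that).

```python
# Minimal sketch (assumed PyTorch-style API). The secure/insecure partition and the
# transferability penalty are illustrative interpretations of the abstract, not the
# authors' exact method.
import torch
import torch.nn.functional as F


def fgsm(model, x, y, eps):
    """Craft a one-step adversarial example against a single member (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()


def secure_insecure_split(members, x, y, eps=8 / 255):
    """For each member, mark samples it still classifies correctly after its own
    attack ('secure') versus those it gets wrong ('insecure')."""
    masks = []
    for f in members:
        x_adv = fgsm(f, x, y, eps)
        with torch.no_grad():
            masks.append(f(x_adv).argmax(dim=1).eq(y))  # True = secure
    return masks


def transferability_penalty(members, x, y, eps=8 / 255):
    """Penalize cases where an adversarial example crafted against one member also
    fools another member, i.e. encourage low cross-member transferability."""
    penalty = x.new_zeros(())
    for i, f_src in enumerate(members):
        x_adv = fgsm(f_src, x, y, eps)
        for j, f_tgt in enumerate(members):
            if i != j:
                # Cross-entropy of the target member on the source member's attack:
                # minimizing it keeps other members correct on transferred attacks.
                penalty = penalty + F.cross_entropy(f_tgt(x_adv), y)
    return penalty / max(len(members) * (len(members) - 1), 1)
```

In a training loop, such a penalty would be added to each member's own adversarial-training loss; how the secure/insecure masks weight the cross-member terms is specific to the paper and not recoverable from the abstract alone.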
Pages: 6831-6839
Number of pages: 9