Backdoor Defense via Deconfounded Representation Learning

Cited by: 5
Authors
Zhang, Zaixi [1 ,2 ]
Liu, Qi [1 ,2 ]
Wang, Zhicai [4 ]
Lu, Zepu [4 ]
Hu, Qingyong [3 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Anhui Prov Key Lab Big Data Anal & Applicat, Hefei, Peoples R China
[2] State Key Lab Cognit Intelligence, Hefei, Anhui, Peoples R China
[3] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[4] Univ Sci & Technol China, Hefei, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPR52729.2023.01177
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) have recently been shown to be vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by injecting a few poisoned examples into the training dataset. While extensive efforts have been made to detect and remove backdoors from backdoored DNNs, it is still unclear whether a backdoor-free clean model can be obtained directly from a poisoned dataset. In this paper, we first construct a causal graph to model the generation process of poisoned data and find that the backdoor attack acts as a confounder, which introduces spurious associations between the input images and the target labels, making the model predictions less reliable. Inspired by this causal understanding, we propose Causality-inspired Backdoor Defense (CBD) to learn deconfounded representations for reliable classification. Specifically, a backdoored model is intentionally trained to capture the confounding effects. The other, clean model is dedicated to capturing the desired causal effects by minimizing the mutual information with the confounding representations from the backdoored model and employing a sample-wise re-weighting scheme. Extensive experiments on multiple benchmark datasets against six state-of-the-art attacks verify that our proposed defense method is effective in reducing backdoor threats while maintaining high accuracy in predicting benign samples. Further analysis shows that CBD can also resist potential adaptive attacks. The code is available at https://github.com/zaixizhang/CBD.
Pages: 12228-12238
Number of pages: 11
Related Papers
50 in total
  • [31] CLB-Defense: based on contrastive learning defense for graph neural network against backdoor attack
    Chen J.
    Xiong H.
    Ma H.
    Zheng Y.
Tongxin Xuebao/Journal on Communications, 2023, 44 (04) : 154 - 166
  • [32] Backdoor Attacks by Leveraging Latent Representation in Competitive Learning for Resistance to Removal
    Iwahana, Kazuki
    Yanai, Naoto
    Inomata, Atsuo
    Fujiwara, Toru
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2025, E108A (03) : 254 - 266
  • [33] Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features
    Zhu, Mingli
    Wei, Shaokui
    Zha, Hongyuan
    Wu, Baoyuan
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [34] Exploiting Layerwise Feature Representation Similarity For Backdoor Defence in Federated Learning
    Walter, Kane
    Nepal, Surya
    Kanhere, Salil
    COMPUTER SECURITY-ESORICS 2024, PT IV, 2024, 14985 : 354 - 374
  • [35] Privacy-Enhancing and Robust Backdoor Defense for Federated Learning on Heterogeneous Data
    Chen, Zekai
    Yu, Shengxing
    Fan, Mingyuan
    Liu, Ximeng
    Deng, Robert H.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 693 - 707
  • [36] DLP: towards active defense against backdoor attacks with decoupled learning process
    Zonghao Ying
    Bin Wu
    Cybersecurity, 6
  • [37] ACQ: Few-shot Backdoor Defense via Activation Clipping and Quantizing
    Jin, Yulin
    Zhang, Xiaoyu
    Lou, Jian
    Chen, Xiaofeng
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 5410 - 5418
  • [38] FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning
    Jia, Jinyuan
    Yuan, Zhuowen
    Sahabandu, Dinuka
    Niu, Luyao
    Rajabi, Arezoo
    Ramasubramanian, Bhaskar
    Li, Bo
    Poovendran, Radha
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [39] BADFL: Backdoor Attack Defense in Federated Learning From Local Model Perspective
    Zhang, Haiyan
    Li, Xinghua
    Xu, Mengfan
    Liu, Ximeng
    Wu, Tong
    Weng, Jian
    Deng, Robert H.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (11) : 5661 - 5674
  • [40] Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning
    Yang, Jie
    Zheng, Jun
    Wang, Haochen
    Li, Jiaxing
    Sun, Haipeng
    Han, Weifeng
    Jiang, Nan
    Tan, Yu-An
    SENSORS, 2023, 23 (03)