A Causal View on Robustness of Neural Networks

Cited by: 0
Authors
Zhang, Cheng [1]
Zhang, Kun [2]
Li, Yingzhen [1]
Affiliations
[1] Microsoft Res, Redmond, WA 98052 USA
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present a causal view on the robustness of neural networks against input manipulations, which applies not only to traditional classification tasks but also to general measurement data. Based on this view, we design a deep causal manipulation augmented model (deep CAMA), which explicitly models possible manipulations on certain causes that lead to changes in the observed effect. We further develop data augmentation and test-time fine-tuning methods to improve deep CAMA's robustness. Compared with discriminative deep neural networks, our proposed model shows superior robustness against unseen manipulations. As a by-product, our model achieves a disentangled representation that separates the representations of manipulations from those of other latent causes.
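The causal graph underlying the abstract treats the observation x as an effect of the class y, other latent causes z, and a manipulation m, so that classification can marginalize out the manipulation. A minimal linear-Gaussian sketch of this graph follows; all names, dimensions, and the linear-Gaussian form are illustrative assumptions for intuition, not the paper's deep architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian instance of the causal graph: x is an effect of the
# class y, nuisance latents z, and a manipulation m (all hypothetical names).
D, K, Dz, Dm = 5, 3, 2, 2
W_y = rng.normal(size=(D, K))   # effect of the class on x
W_z = rng.normal(size=(D, Dz))  # effect of other latent causes on x
W_m = rng.normal(size=(D, Dm))  # effect of manipulations on x
sigma2 = 0.1                    # observation noise variance

def sample_x(y, m):
    """Generate an observation from class y under manipulation m."""
    z = rng.normal(size=Dz)
    noise = np.sqrt(sigma2) * rng.normal(size=D)
    return W_y[:, y] + W_z @ z + W_m @ m + noise

def classify(x):
    """Pick y maximizing p(x | y) with z ~ N(0, I) and m ~ N(0, I)
    marginalized out: a Gaussian with mean W_y[:, y] and covariance
    W_z W_z^T + W_m W_m^T + sigma2 I (shared across classes)."""
    cov = W_z @ W_z.T + W_m @ W_m.T + sigma2 * np.eye(D)
    prec = np.linalg.inv(cov)
    scores = [-(x - W_y[:, y]) @ prec @ (x - W_y[:, y]) for y in range(K)]
    return int(np.argmax(scores))

# A manipulated input: the true class is 1, but a nonzero m shifts x.
x = sample_x(1, m=np.array([1.5, -1.0]))
print(classify(x))
```

Because the manipulation enters through its own weight matrix W_m, its contribution to x is modeled separately from that of z, which is the toy analogue of the disentanglement the abstract describes.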
Pages: 13
Related Papers
50 items total
  • [1] Robustness in biological neural networks
    Kalampokis, A
    Kotsavasiloglou, C
    Argyrakis, P
    Baloyannis, S
    [J]. PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2003, 317 (3-4) : 581 - 590
  • [2] Robustness Verification in Neural Networks
    Wurm, Adrian
    [J]. INTEGRATION OF CONSTRAINT PROGRAMMING, ARTIFICIAL INTELLIGENCE, AND OPERATIONS RESEARCH, PT II, CPAIOR 2024, 2024, 14743 : 263 - 278
  • [3] Causal Abstractions of Neural Networks
    Geiger, Atticus
    Lu, Hanson
    Icard, Thomas
    Potts, Christopher
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [4] Verification of Neural Networks' Global Robustness
    Kabaha, Anan
    Drachsler-Cohen, Dana
    [J]. PROCEEDINGS OF THE ACM ON PROGRAMMING LANGUAGES-PACMPL, 2024, 8 (OOPSLA):
  • [5] The geometry of robustness in spiking neural networks
    Calaim, Nuno
    Dehmelt, Florian A.
    Goncalves, Pedro J.
    Machens, Christian K.
    [J]. ELIFE, 2022, 11
  • [6] ε-Weakened Robustness of Deep Neural Networks
    Huang, Pei
    Yang, Yuting
    Liu, Minghao
    Jia, Fuqi
    Ma, Feifei
    Zhang, Jian
    [J]. PROCEEDINGS OF THE 31ST ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2022, 2022, : 126 - 138
  • [7] Probabilistic Robustness Quantification of Neural Networks
    Kishan, Gopi
    [J]. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 15966 - 15967
  • [8] Wasserstein distributional robustness of neural networks
    Bai, Xingjian
    He, Guangyi
    Jiang, Yifan
    Obloj, Jan
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [9] Noise robustness in multilayer neural networks
    Copelli, M
    Eichhorn, R
    Kinouchi, O
    Biehl, M
    Simonetti, R
    Riegler, P
    Caticha, N
    [J]. EUROPHYSICS LETTERS, 1997, 37 (06): : 427 - 432
  • [10] Towards Evaluating the Robustness of Neural Networks
    Carlini, Nicholas
    Wagner, David
    [J]. 2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2017, : 39 - 57