LOGICDEF: An Interpretable Defense Framework against Adversarial Examples via Inductive Scene Graph Reasoning

Cited by: 0
Authors
Yang, Yuan [1 ]
Kerce, James C. [1 ]
Fekri, Faramarz [1 ]
Affiliation
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep vision models have provided new capability across a spectrum of applications in transportation, manufacturing, agriculture, commerce, and security. However, recent studies have demonstrated that these models are vulnerable to adversarial attacks, exposing a risk of use in critical applications where untrusted parties have access to the data environment or even directly to the sensor inputs. Existing adversarial defense methods are either limited to specific types of attacks or are too complex to be applied to practical vision models. More importantly, these methods rely on techniques that are not interpretable to humans. In this work, we argue that an effective defense should produce an explanation of why the system is being attacked, using a representation that is easily readable by a human user, e.g., a logic formalism. To this end, we propose logic adversarial defense (LOGICDEF), a defense framework that utilizes the scene graph of the image to provide a contextual structure for detecting and explaining object classification. Our framework first mines inductive logic rules from the extracted scene graph, and then uses these rules to construct a defense model that alerts the user when the vision model violates the consistency rules. The defense model is interpretable, and its robustness is further enhanced by incorporating existing relational commonsense knowledge from projects such as ConceptNet. To handle the hierarchical nature of such relational reasoning, we use a curriculum learning approach based on object taxonomy, yielding additional improvements to training and performance.
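The pipeline the abstract describes (mine logic rules over scene-graph triples, then alert when the classifier's output contradicts a rule whose conditions hold in the scene) can be illustrated with a minimal sketch. The rule format, entity names, labels, and toy scene below are hypothetical illustrations, not the authors' implementation:

    # Minimal sketch of rule-based consistency checking over a scene graph.
    # All rule contents, entity names, and labels are illustrative only.
    from dataclasses import dataclass

    # A scene-graph fact is a (subject, predicate, object) 3-tuple of strings.

    @dataclass
    class HornRule:
        body: list       # triple patterns using the placeholder "X"
        head_label: str  # class label the rule expects for "X"

    def bind(pattern, entity):
        """Substitute a concrete entity for the placeholder 'X'."""
        return tuple(entity if term == "X" else term for term in pattern)

    def violations(scene, predicted_labels, rules):
        """Alert when a classifier's label contradicts a satisfied rule body."""
        alerts = []
        for entity, label in predicted_labels.items():
            for rule in rules:
                body = [bind(p, entity) for p in rule.body]
                if all(t in scene for t in body) and label != rule.head_label:
                    alerts.append(f"{entity}: predicted '{label}', but context "
                                  f"{body} implies '{rule.head_label}'")
        return alerts

    # Toy example: an attacked classifier calls a sign a bird, yet the
    # scene-graph context still says the object is mounted on a pole.
    scene = {("obj1", "mounted_on", "pole1"), ("pole1", "beside", "road1")}
    rules = [HornRule(body=[("X", "mounted_on", "pole1")], head_label="sign")]
    print(violations(scene, {"obj1": "bird"}, rules))

Running the sketch prints one alert for obj1, which is the interpretable output the abstract argues for: the user sees both the suspect prediction and the scene-graph facts it contradicts.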
Pages: 8840 - 8848
Page count: 9