Backdoor Attack on Deep Neural Networks in Perception Domain

Citations: 0
Authors
Mo, Xiaoxing [1 ]
Zhang, Leo Yu [2 ]
Sun, Nan [3 ]
Luo, Wei [1 ]
Gao, Shang [1 ]
Affiliations
[1] Deakin Univ, Geelong, Vic, Australia
[2] Griffith Univ, Nathan, Qld, Australia
[3] Univ New South Wales Canberra, Canberra, ACT, Australia
Keywords
Deep Neural Networks; Backdoor Attacks; Perception Domain;
DOI
10.1109/IJCNN54540.2023.10191661
CLC Classification Code
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As deep neural networks (DNNs) are widely deployed in various applications, the security of pretrained DNNs is crucial, since backdoors can be introduced through poisoned training. A backdoored DNN model behaves normally on benign inputs but produces targeted misclassifications on inputs that contain an intended pattern known as a trojan trigger. Current trigger-generation techniques mainly focus on the physical and model domains. In this work, we investigate trojan triggers from the perception domain, specifically the physical process by which light rays pass through the lens and strike the optical sensor. We introduce a new type of backdoor attack, the Lens Flare attack, which targets the perception domain and is more physically plausible and stealthy. Experiments show that DNNs with a Lens Flare backdoor achieve accuracy comparable to their original counterparts on benign inputs, while misclassifying, with high certainty, inputs in which the Lens Flare trigger is present. We also demonstrate that the Lens Flare backdoor resists state-of-the-art backdoor defenses.
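The dirty-label poisoning pipeline the abstract describes (blend a trigger into a fraction of training images and relabel them with the attacker's target class) can be sketched as follows. This is a minimal illustration assuming NumPy only: the radial glare patch below is a hand-rolled stand-in for the paper's physically modeled lens-flare trigger, and the function names (`make_flare_trigger`, `poison`) and all parameters are hypothetical, not taken from the paper.

```python
import numpy as np

def make_flare_trigger(size=32, center=(8, 8), radius=6.0):
    """Synthesize a simple radial glare patch as a stand-in for a
    lens-flare trigger (illustrative only; the paper's trigger models
    the optics of light scattering inside the camera lens)."""
    yy, xx = np.mgrid[0:size, 0:size]
    dist = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    # Brightness falls off linearly with distance from the flare center.
    glare = np.clip(1.0 - dist / radius, 0.0, 1.0)
    return glare[..., None]  # shape (size, size, 1), broadcasts over RGB

def poison(images, labels, target_class, rate=0.1, alpha=0.6, seed=0):
    """Blend the trigger into a random fraction of the training images
    and relabel them with the attacker's target class (classic
    dirty-label backdoor poisoning)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    trigger = make_flare_trigger(size=images.shape[1])
    # Alpha-blend the glare into the selected images, keeping pixels in [0, 1].
    images[idx] = np.clip(images[idx] * (1 - alpha * trigger) + alpha * trigger,
                          0.0, 1.0)
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns both the clean task and the spurious "glare implies target class" shortcut, which is why accuracy on benign inputs stays close to the original model while triggered inputs are misclassified.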
Pages: 8
Related Papers (50 in total)
  • [1] Adaptive Backdoor Attack against Deep Neural Networks
    He, Honglu
    Zhu, Zhiying
    Zhang, Xinpeng
    [J]. CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 136 (03): : 2617 - 2633
  • [2] Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks
    Ning, Rui
    Li, Jiang
    Xin, Chunsheng
    Wu, Hongyi
    Wang, Chonggang
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 10309 - 10318
  • [3] Effective Backdoor Attack on Graph Neural Networks in Spectral Domain
    Zhao, Xiangyu
    Wu, Hanzhou
    Zhang, Xinpeng
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (07) : 12102 - 12114
  • [4] Universal backdoor attack on deep neural networks for malware detection
    Zhang, Yunchun
    Feng, Fan
    Liao, Zikun
    Li, Zixuan
    Yao, Shaowen
    [J]. APPLIED SOFT COMPUTING, 2023, 143
  • [5] Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks
    Kwon, Hyun
    Yoon, Hyunsoo
    Park, Ki-Woong
    [J]. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2020, E103D (04) : 883 - 887
  • [6] SGBA: A stealthy scapegoat backdoor attack against deep neural networks
    He, Ying
    Shen, Zhili
    Xia, Chang
    Hua, Jingyu
    Tong, Wei
    Zhong, Sheng
    [J]. COMPUTERS & SECURITY, 2024, 136
  • [7] Compression-resistant backdoor attack against deep neural networks
    Xue, Mingfu
    Wang, Xin
    Sun, Shichang
    Zhang, Yushu
    Wang, Jian
    Liu, Weiqiang
    [J]. APPLIED INTELLIGENCE, 2023, 53 (17) : 20402 - 20417
  • [8] Inconspicuous Data Augmentation Based Backdoor Attack on Deep Neural Networks
    Xu, Chaohui
    Liu, Wenye
    Zheng, Yue
    Wang, Si
    Chang, Chip-Hong
    [J]. 2022 IEEE 35TH INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE (IEEE SOCC 2022), 2022, : 237 - 242
  • [9] Untargeted Backdoor Attack Against Deep Neural Networks With Imperceptible Trigger
    Xue, Mingfu
    Wu, Yinghao
    Ni, Shifeng
    Zhang, Leo Yu
    Zhang, Yushu
    Liu, Weiqiang
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (03) : 5004 - 5013