Backdoor Attacks against Deep Neural Networks by Personalized Audio Steganography

Cited: 1
Authors
Liu, Peng [1]
Zhang, Shuyi [1]
Yao, Chuanjian [1]
Ye, Wenzhe [1]
Li, Xianxian [1]
Affiliations
[1] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin 541004, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/ICPR56361.2022.9956521
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In cyber security, backdoor attacks are widely studied. These attacks inject a hidden backdoor into training samples so that the model makes attacker-chosen misjudgments, achieving the attack's goal. However, because the triggers in existing backdoor attacks are largely uniform, defenders can easily detect the backdoor by matching the identical trigger behavior across different corrupted samples. In addition, most current work targets image classification, and there is almost no related research on backdoor attacks against speaker verification. This paper proposes a novel audio steganography-based personalized-trigger backdoor attack that embeds hidden triggers into deep neural networks. Specifically, the backdoored speaker-verification system uses a pre-trained audio steganography network that applies a distinct trigger to each sample, implicitly writing personalized information into all corrupted samples. This personalization significantly improves both the concealment and the success rate of the attack. Moreover, only the frequency and pitch of the audio are modified, while the structure of the attacked model is left unaltered, making the attack behavior stealthy. The proposed method opens a new attack direction for speaker verification, and extensive experiments verify its effectiveness.
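The abstract describes a data-poisoning pipeline in which each poisoned sample receives its own trigger rather than one shared pattern. The sketch below is not the paper's method (which uses a pre-trained steganography network); it is a minimal illustration of the "personalized trigger" idea, assuming a per-sample frequency perturbation derived from a hash of the sample's identifier. The sampling rate, target label, trigger amplitude, and frequency band are all hypothetical choices for the example.

```python
import hashlib
import numpy as np

SAMPLE_RATE = 16000  # assumed sampling rate (Hz)
TARGET_LABEL = 0     # hypothetical target speaker ID chosen by the attacker

def personalized_trigger(audio: np.ndarray, sample_id: bytes) -> np.ndarray:
    """Add a low-amplitude sinusoid whose frequency is derived from a
    hash of the sample's ID, so every poisoned sample carries a
    different (personalized) perturbation instead of one fixed trigger."""
    digest = hashlib.sha256(sample_id).digest()
    # Map the hash to a frequency in an assumed 4-6 kHz band.
    freq = 4000 + int.from_bytes(digest[:2], "big") % 2000
    t = np.arange(len(audio)) / SAMPLE_RATE
    trigger = 0.005 * np.sin(2 * np.pi * freq * t)  # barely audible
    return audio + trigger

def poison_dataset(samples, labels, ids, rate=0.1, seed=0):
    """Poison a fraction of the training set: embed the per-sample
    trigger and relabel the sample to the attacker's target speaker."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(samples), size=int(rate * len(samples)), replace=False)
    poisoned, new_labels = list(samples), list(labels)
    for i in idx:
        poisoned[i] = personalized_trigger(samples[i], ids[i])
        new_labels[i] = TARGET_LABEL
    return poisoned, new_labels
```

Because the trigger frequency is a deterministic function of the sample ID, a defender cannot flag poisoned samples by searching for one repeated perturbation, which is the weakness of uniform-trigger attacks that the paper targets.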
Pages: 68 - 74
Page count: 7
Related Papers
(50 items)
  • [21] Defense-Resistant Backdoor Attacks Against Deep Neural Networks in Outsourced Cloud Environment
    Gong, Xueluan
    Chen, Yanjiao
    Wang, Qian
    Huang, Huayang
    Meng, Lingshuo
    Shen, Chao
    Zhang, Qian
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (08) : 2617 - 2631
  • [22] Automated Segmentation to Make Hidden Trigger Backdoor Attacks Robust against Deep Neural Networks
    Ali, Saqib
    Ashraf, Sana
    Yousaf, Muhammad Sohaib
    Riaz, Shazia
    Wang, Guojun
    APPLIED SCIENCES-BASEL, 2023, 13 (07):
  • [23] Adaptive Backdoor Attack against Deep Neural Networks
    He, Honglu
    Zhu, Zhiying
    Zhang, Xinpeng
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 136 (03): : 2617 - 2633
  • [24] Detecting Backdoor Attacks via Class Difference in Deep Neural Networks
    Kwon, Hyun
    IEEE ACCESS, 2020, 8 : 191049 - 191056
  • [25] Backdoor Attacks to Graph Neural Networks
    Zhang, Zaixi
    Jia, Jinyuan
    Wang, Binghui
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 26TH ACM SYMPOSIUM ON ACCESS CONTROL MODELS AND TECHNOLOGIES, SACMAT 2021, 2021, : 15 - 26
  • [26] Backdoor Attacks and Defenses for Deep Neural Networks in Outsourced Cloud Environments
    Chen, Yanjiao
    Gong, Xueluan
    Wang, Qian
    Di, Xing
    Huang, Huayang
    IEEE NETWORK, 2020, 34 (05): : 141 - 147
  • [27] Toward Backdoor Attacks for Image Captioning Model in Deep Neural Networks
    Kwon, Hyun
    Lee, Sanghyun
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [28] Deep Residual Neural Networks for Image in Audio Steganography (Workshop Paper)
    Agarwal, Shivam
    Venkatraman, Siddarth
    2020 IEEE SIXTH INTERNATIONAL CONFERENCE ON MULTIMEDIA BIG DATA (BIGMM 2020), 2020, : 430 - 434
  • [29] Detecting Backdoor Attacks on Deep Neural Networks Based on Model Parameters Analysis
    Ma, Mingyuan
    Li, Hu
    Kuang, Xiaohui
    2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022, : 630 - 637
  • [30] A Survey of Backdoor Attacks and Defenses on Neural Networks
    Wang, Xu-Tong
    Yin, Jie
    Liu, Chao-Ge
    Xu, Chen-Chen
    Huang, Hao
    Wang, Zhi
    Zhang, Fang-Jiao
    Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47 (08): : 1713 - 1743