Multi-layer Feature Augmentation Based Transferable Adversarial Examples Generation for Speaker Recognition

Cited: 0
Authors
Li, Zhuhai [1 ]
Zhang, Jie [1 ]
Guo, Wu [1 ]
Affiliations
[1] Univ Sci & Technol China, NERC SLIP, Hefei 230027, Peoples R China
Keywords
Adversarial Attack; Transferability; Speaker Recognition;
DOI
10.1007/978-981-97-5591-2_32
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial examples that remain almost imperceptible to humans can mislead practical speaker recognition systems. However, most existing adversarial examples generated with substitute models transfer poorly to unseen victim models. To tackle this problem, in this work we propose a multi-layer feature augmentation method to improve the transferability of adversarial examples. Specifically, we apply data augmentation to the intermediate-layer feature maps of the substitute model to create diverse pseudo victim models. By attacking the ensemble of the substitute model and the corresponding augmented models, the proposed method helps the adversarial examples avoid overfitting to the substitute model, resulting in more transferable adversarial examples. Experimental results on the VoxCeleb dataset verify the effectiveness of the proposed approach for both speaker identification and speaker verification tasks.
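The procedure described in the abstract, augmenting the substitute model's intermediate-layer features to create pseudo victim models and then attacking the ensemble, can be sketched with a toy numpy model. Everything below (the two-layer linear "speaker model", the random feature-scaling masks, the single FGSM-style step, and all shapes) is an illustrative assumption, not the paper's actual architecture or attack:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy substitute "speaker model": linear feature extractor + linear classifier.
W1 = rng.standard_normal((8, 16))   # input -> intermediate feature map
W2 = rng.standard_normal((16, 4))   # feature map -> speaker logits

def logits(x, feat_mask):
    """Forward pass; feat_mask perturbs the intermediate features,
    emulating one augmented pseudo victim model."""
    h = x @ W1                       # intermediate-layer features
    return (h * feat_mask) @ W2

def ensemble_fgsm_step(x, label, n_aug=4, eps=0.1):
    """One untargeted FGSM-style step against the substitute model
    plus n_aug feature-augmented copies (random scaling masks)."""
    masks = [np.ones(16)] + [rng.uniform(0.8, 1.2, 16) for _ in range(n_aug)]
    grad = np.zeros_like(x)
    for m in masks:
        z = logits(x, m)
        p = np.exp(z - z.max()); p /= p.sum()   # softmax probabilities
        p[label] -= 1.0                         # d(cross-entropy)/d(logits)
        grad += ((p @ W2.T) * m) @ W1.T         # backprop to the input
    return x + eps * np.sign(grad / len(masks))

x = rng.standard_normal(8)          # stand-in for an utterance embedding
x_adv = ensemble_fgsm_step(x, label=2)
```

Averaging gradients over randomly augmented feature maps is what discourages the perturbation from overfitting to one fixed substitute model, which is the mechanism the abstract credits for improved transferability.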
Pages: 373-385 (13 pages)
Related Papers
50 records in total
  • [1] Adversarial Examples in Multi-Layer Random ReLU Networks
    Bartlett, Peter L.
    Bubeck, Sebastien
    Cherapanamjeri, Yeshwanth
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [2] Hierarchical feature transformation attack: Generate transferable adversarial examples for face recognition
    Li, Yuanbo
    Hu, Cong
    Wang, Rui
    Wu, Xiaojun
    APPLIED SOFT COMPUTING, 2025, 172
  • [3] Feature-Based Adversarial Training for Deep Learning Models Resistant to Transferable Adversarial Examples
    Ryu, Gwonsang
    Choi, Daeseon
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2022, E105D (05) : 1039 - 1049
  • [4] Rethinking multi-spatial information for transferable adversarial attacks on speaker recognition systems
    Zhang, Junjian
    Tan, Hao
    Wang, Le
    Qian, Yaguan
    Gu, Zhaoquan
    CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2024, 9 (03) : 620 - 631
  • [5] Transferable universal adversarial perturbations against speaker recognition systems
    Liu, Xiaochen
    Tan, Hao
    Zhang, Junjian
    Li, Aiping
    Gu, Zhaoquan
    WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2024, 27 (03):
  • [6] Feature Space Perturbations Yield More Transferable Adversarial Examples
    Inkawhich, Nathan
    Wen, Wei
    Li, Hai
    Chen, Yiran
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 7059 - 7067
  • [7] Multi-layer adversarial domain adaptation with feature joint distribution constraint
    Fang, Yuchun
    Xiao, Zhengye
    Zhang, Wei
    NEUROCOMPUTING, 2021, 463 : 298 - 308
  • [8] Underwater vessel sound recognition based on multi-layer feature and attention mechanism
    Wei, Wei
    Li, Jing
    Han, Yucheng
    Zhang, Lili
    Cui, Ning
    Yu, Pei
    Tan, Hongxin
    Yang, Xudong
    Yang, Kang
    SCIENTIFIC REPORTS, 2025, 15 (01):
  • [9] A Visual Recognition Model Based on Hierarchical Feature Extraction and Multi-layer SNN
    Xu, Xiaoliang
    Lu, Wensi
    Fang, Qiming
    Xia, Yixing
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT I, 2018, 11301 : 525 - 534
  • [10] Masking Speech Feature to Detect Adversarial Examples for Speaker Verification
    Chen, Xing
    Yao, Jiadi
    Zhang, Xiao-Lei
    PROCEEDINGS OF 2022 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2022, : 191 - 195