UNIVERSAL ADVERSARIAL ATTACK AGAINST SPEAKER RECOGNITION MODELS

Citations: 0
Authors
Hanina, Shoham [1 ]
Zolfi, Alon [1 ]
Elovici, Yuval [1 ]
Shabtai, Asaf [1 ]
Affiliations
[1] Ben Gurion Univ Negev, Negev, Israel
Keywords
Speaker Recognition; Adversarial Attack
DOI
10.1109/ICASSP48485.2024.10447073
Abstract
In recent years, deep learning-based speaker recognition (SR) models have received considerable attention from the machine learning (ML) community. Their growing popularity stems largely from their effectiveness at identifying speakers in many security-sensitive applications. Researchers have challenged the robustness of SR models and revealed the models' vulnerability to adversarial ML attacks. However, most prior studies proposed tailor-made perturbations that are only effective for the speakers they were trained on (i.e., a closed set). In this paper, we propose the Anonymous Speakers attack, a universal adversarial perturbation that fools SR models on all speakers in an open-set environment, i.e., including speakers that were not part of the attack's training phase. Using a custom optimization process, we craft a single perturbation that can be applied to the original recording of any speaker and causes the SR model to misclassify it. We examined the attack's effectiveness on various state-of-the-art SR models with a wide range of speaker identities. Our experimental results show that the attack substantially reduces the similarity of the adversarial embeddings to the speaker's original embedding representation while maintaining a high signal-to-noise ratio (SNR).
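The abstract describes optimizing a single perturbation that, added to any speaker's recording, pushes the resulting embedding away from that speaker's clean embedding while keeping the SNR high. The following is a minimal NumPy sketch of that idea, not the authors' method: a toy linear model stands in for a real SR embedding network, the "recordings" are random signals, and the sign-PGD update with an L-inf ball is one common (assumed, illustrative) way to bound the perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a speaker-embedding model (hypothetical; the paper attacks
# real SR networks that map a waveform to a unit-norm speaker embedding).
W = rng.standard_normal((64, 1000)) / np.sqrt(1000)

def embed(x):
    """Map a 'waveform' to a unit-norm embedding."""
    z = W @ x
    return z / np.linalg.norm(z)

# "Training" speakers: random signals standing in for recordings.
speakers = [rng.standard_normal(1000) for _ in range(8)]
clean_emb = [embed(x) for x in speakers]

eps = 0.05                                  # L-inf bound keeps the SNR high
delta = rng.uniform(-eps, eps, 1000) * 0.1  # small random init (gradient is zero at 0)
alpha = eps / 10                            # PGD step size

for _ in range(100):
    grad = np.zeros_like(delta)
    for x, e_clean in zip(speakers, clean_emb):
        z = W @ (x + delta)
        nz = np.linalg.norm(z)
        e_adv = z / nz
        s = float(e_adv @ e_clean)          # cosine similarity to the clean embedding
        # Analytic gradient of s w.r.t. delta for the linear toy model.
        grad += W.T @ ((e_clean - s * e_adv) / nz)
    # Sign-PGD step *down* the average similarity, projected back into the ball.
    delta = np.clip(delta - alpha * np.sign(grad), -eps, eps)

# A universal delta should also reduce similarity for a speaker never seen
# during the attack's optimization (the open-set claim).
unseen = rng.standard_normal(1000)
before = float(embed(unseen) @ embed(unseen))   # ~1.0 by construction
after = float(embed(unseen + delta) @ embed(unseen))
snr_db = 10 * np.log10(np.sum(unseen**2) / np.sum(delta**2))
```

A real implementation would backpropagate through the SR network over batches of utterances; the L-inf projection here is simply one concrete way to trade attack strength against the SNR constraint the abstract mentions.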
Pages: 4860-4864
Page count: 5
Related papers
50 records (items [21]-[30] shown)
  • [21] Qin, Ziheng; Zhang, Xianglong; Li, Shujun. A robust adversarial attack against speech recognition with UAP. HIGH-CONFIDENCE COMPUTING, 2023, 3 (01).
  • [22] Tian, Binyu; Juefei-Xu, Felix; Guo, Qing; Xie, Xiaofei; Li, Xiaohong; Liu, Yang. AVA: Adversarial Vignetting Attack against Visual Recognition. PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021: 1046-1053.
  • [23] Yan, Bei; Zhang, Jie; Yuan, Zheng; Shan, Shiguang. Adaptive Adversarial Patch Attack on Face Recognition Models. 2023 IEEE INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS, IJCB, 2023.
  • [24] Chen, Meng; Lu, Li; Ba, Zhongjie; Ren, Kui. PhoneyTalker: An Out-of-the-Box Toolkit for Adversarial Example Attack on Speaker Recognition. IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2022), 2022: 1419-1428.
  • [25] Amada, Takuma; Liew, Seng Pei; Kakizaki, Kazuya; Araki, Toshinori. Universal Adversarial Spoofing Attacks against Face Recognition. 2021 INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB 2021), 2021.
  • [26] Sun, Zongkun; Ren, Yanzhen; Huang, Yihuan; Liu, Wuyang; Zhu, Hongcheng. AFPM: A Low-Cost and Universal Adversarial Defense for Speaker Recognition Systems. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19: 2273-2287.
  • [27] Zhang, Xinyu; Xu, Yang; Zhang, Sicong; Li, Xiaojian. A Highly Stealthy Adaptive Decay Attack Against Speaker Recognition. IEEE ACCESS, 2022, 10: 118789-118805.
  • [28] Zhang, Chaoning; Benz, Philipp; Lin, Chenguo; Karjauv, Adil; Wu, Jing; Kweon, In So. A Survey on Universal Adversarial Attack. PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021: 4687-4694.
  • [29] Wan, Xuanshen; Liu, Wei; Niu, Chaoyang; Lu, Wanjie; Du, Meng; Li, Yuanli. Black-Box Universal Adversarial Attack for DNN-Based Models of SAR Automatic Target Recognition. IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17: 8673-8696.
  • [30] Zhong, Yaoyao; Deng, Weihong. Towards Transferable Adversarial Attack Against Deep Face Recognition. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16: 1452-1466.