UNIVERSAL ADVERSARIAL ATTACK AGAINST SPEAKER RECOGNITION MODELS

Cited by: 0
Authors
Hanina, Shoham [1 ]
Zolfi, Alon [1 ]
Elovici, Yuval [1 ]
Shabtai, Asaf [1 ]
Affiliations
[1] Ben Gurion Univ Negev, Negev, Israel
Keywords
Speaker Recognition; Adversarial Attack
DOI
10.1109/ICASSP48485.2024.10447073
Abstract
In recent years, deep learning-based speaker recognition (SR) models have received a great deal of attention from the machine learning (ML) community. Their growing popularity derives largely from their effectiveness in identifying speakers in many security-sensitive applications. Researchers have challenged the robustness of SR models and revealed their vulnerability to adversarial ML attacks. However, prior studies mainly proposed tailor-made perturbations that are only effective against the speakers they were trained on (i.e., a closed set). In this paper, we propose the Anonymous Speakers attack, a universal adversarial perturbation that fools SR models on all speakers in an open-set environment, i.e., including speakers that were not part of the attack's training phase. Using a custom optimization process, we craft a single perturbation that can be applied to the original recording of any speaker and causes misclassification by the SR model. We examined the attack's effectiveness on various state-of-the-art SR models with a wide range of speaker identities. Our experimental results show that the attack substantially reduces the similarity between the perturbed recording's embedding and the speaker's original embedding while maintaining a high signal-to-noiseratio (SNR) value.
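The abstract's core idea, a single input-agnostic perturbation optimized over a batch of training speakers to push each perturbed recording's embedding away from its clean embedding, while a norm budget keeps the added noise quiet, can be sketched as below. This is an illustrative toy under stated assumptions, not the paper's implementation: the linear `embed` model, the signed-gradient update, and the hyperparameters `EPS`, `STEP`, and `ITERS` are all stand-ins for whatever SR model and optimizer the attack actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a speaker-embedding model: a fixed random linear map
# followed by L2 normalization (real SR models are deep networks).
DIM_IN, DIM_EMB = 64, 16
W = rng.standard_normal((DIM_EMB, DIM_IN)) / np.sqrt(DIM_IN)

def embed(x):
    z = W @ x
    return z / np.linalg.norm(z)

def grad_cosine(x, delta):
    """Gradient w.r.t. delta of cos(embed(x + delta), embed(x)),
    analytic for this linear toy model."""
    e = embed(x)                       # clean embedding, held fixed
    u = W @ (x + delta)
    nu = np.linalg.norm(u)
    d_u = e / nu - (e @ u) * u / nu**3
    return W.T @ d_u

# "Training" speakers seen while optimizing the universal perturbation
speakers = [rng.standard_normal(DIM_IN) for _ in range(8)]

EPS, STEP, ITERS = 0.3, 0.02, 200
delta = 0.01 * rng.standard_normal(DIM_IN)  # small random init (grad is 0 at delta = 0)
for _ in range(ITERS):
    # Descend on the cosine similarity averaged over ALL training speakers,
    # so the same delta degrades every speaker's embedding at once.
    g = np.mean([grad_cosine(x, delta) for x in speakers], axis=0)
    delta -= STEP * np.sign(g)
    delta = np.clip(delta, -EPS, EPS)  # L-inf budget keeps the perturbation quiet (high SNR)

# Open-set check: apply the same delta to a speaker never seen during optimization
x_new = rng.standard_normal(DIM_IN)
print("held-out cosine:", embed(x_new + delta) @ embed(x_new))
```

The signed-gradient step with an L-inf projection is standard PGD-style machinery; the only point the sketch makes is that averaging the gradient over many speakers yields one perturbation rather than one per victim.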
Pages: 4860 - 4864 (5 pages)
Related Papers
50 records in total (10 shown)
  • [1] Universal Sparse Adversarial Attack on Video Recognition Models
    Li, Haoxuan
    Wang, Zheng
    INTERNATIONAL JOURNAL OF MULTIMEDIA DATA ENGINEERING & MANAGEMENT, 2021, 12 (03):
  • [2] Transferable universal adversarial perturbations against speaker recognition systems
    Liu, Xiaochen
    Tan, Hao
    Zhang, Junjian
    Li, Aiping
    Gu, Zhaoquan
    WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2024, 27 (03):
  • [3] Adversarial Mask: Real-World Universal Adversarial Attack on Face Recognition Models
    Zolfi, Alon
    Avidan, Shai
    Elovici, Yuval
    Shabtai, Asaf
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT III, 2023, 13715 : 304 - 320
  • [4] Inaudible Adversarial Perturbations for Targeted Attack in Speaker Recognition
    Wang, Qing
    Guo, Pengcheng
    Xie, Lei
    INTERSPEECH 2020, 2020, : 4228 - 4232
  • [5] REAL-TIME, UNIVERSAL, AND ROBUST ADVERSARIAL ATTACKS AGAINST SPEAKER RECOGNITION SYSTEMS
    Xie, Yi
    Shi, Cong
    Li, Zhuohang
    Liu, Jian
    Chen, Yingying
    Yuan, Bo
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 1738 - 1742
  • [6] UNIVERSAL ADVERSARIAL PERTURBATIONS GENERATIVE NETWORK FOR SPEAKER RECOGNITION
    Li, Jiguo
    Zhang, Xinfeng
    Jia, Chuanmin
    Xu, Jizheng
    Zhang, Li
    Wang, Yue
    Ma, Siwei
    Gao, Wen
    2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2020,
  • [7] Pairing Weak with Strong: Twin Models for Defending against Adversarial Attack on Speaker Verification
    Peng, Zhiyuan
    Li, Xu
    Lee, Tan
    INTERSPEECH 2021, 2021, : 4284 - 4288
  • [8] MIGAA: A Physical Adversarial Attack Method against SAR Recognition Models
    Xie, Jianyue
    Peng, Bo
    Lu, Zhengzhi
    Zhou, Jie
    Peng, Bowen
    2024 9TH INTERNATIONAL CONFERENCE ON COMPUTER AND COMMUNICATION SYSTEMS, ICCCS 2024, 2024, : 309 - 314
  • [9] Layerwise universal adversarial attack on NLP models
    Tsymboi, Olga
    Malaev, Danil
    Petrovskii, Andrei
    Oseledets, Ivan
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 129 - 143
  • [10] Real-time, Robust and Adaptive Universal Adversarial Attacks Against Speaker Recognition Systems
    Xie, Yi
    Li, Zhuohang
    Shi, Cong
    Liu, Jian
    Chen, Yingying
    Yuan, Bo
    JOURNAL OF SIGNAL PROCESSING SYSTEMS, 2021, 93 : 1187 - 1200