UNIVERSAL ADVERSARIAL ATTACK AGAINST SPEAKER RECOGNITION MODELS

Cited: 0
Authors
Hanina, Shoham [1 ]
Zolfi, Alon [1 ]
Elovici, Yuval [1 ]
Shabtai, Asaf [1 ]
Affiliations
[1] Ben Gurion Univ Negev, Negev, Israel
Keywords
Speaker Recognition; Adversarial Attack
DOI
10.1109/ICASSP48485.2024.10447073
Abstract
In recent years, deep learning-based speaker recognition (SR) models have received a large amount of attention from the machine learning (ML) community. Their increasing popularity derives in large part from their effectiveness in identifying speakers in many security-sensitive applications. Researchers have attempted to challenge the robustness of SR models, and they have revealed the models' vulnerability to adversarial ML attacks. However, the studies performed mainly proposed tailor-made perturbations that are only effective for the speakers they were trained on (i.e., a closed-set). In this paper, we propose the Anonymous Speakers attack, a universal adversarial perturbation that fools SR models on all speakers in an open-set environment, i.e., including speakers that were not part of the training phase of the attack. Using a custom optimization process, we craft a single perturbation that can be applied to the original recording of any speaker and results in misclassification by the SR model. We examined the attack's effectiveness on various state-of-the-art SR models with a wide range of speaker identities. The results of our experiments show that our attack largely reduces the embeddings' similarity to the speaker's original embedding representation while maintaining a high signal-to-noise ratio value.
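The abstract describes the attack only at a high level: one perturbation is optimized over many training speakers so that, when added to any recording, the resulting embedding diverges from the speaker's enrolled embedding while the added noise stays small. The sketch below illustrates that kind of objective under stated assumptions; the embedding model `embed`, the data loader of (waveform, enrolled embedding) pairs, the cosine-similarity loss, and the L-infinity bound `epsilon` (standing in for the paper's signal-to-noise-ratio requirement) are all illustrative choices, not the authors' actual optimization process.

```python
# Minimal sketch (an assumption, not the paper's exact method) of optimizing
# a single universal adversarial perturbation against a speaker embedding model.
import torch
import torch.nn.functional as F

def craft_universal_perturbation(embed, loader, epsilon=0.01, steps=100, lr=1e-3):
    """Optimize one perturbation shared across all training speakers.

    embed:  hypothetical model mapping a batch of waveforms to embeddings
    loader: hypothetical iterable of (waveform, enrolled_embedding) batches
    """
    delta = torch.zeros(1, 16000, requires_grad=True)  # assumed 1 s at 16 kHz
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for waveform, enrolled in loader:
            adv = waveform + delta  # the same delta is added to every recording
            # Minimizing cosine similarity pushes the adversarial embedding
            # away from the speaker's enrolled representation, which is what
            # causes the SR model to misclassify.
            loss = F.cosine_similarity(embed(adv), enrolled, dim=-1).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                # An L-infinity clamp keeps the perturbation small; the paper
                # instead reports maintaining a high signal-to-noise ratio.
                delta.clamp_(-epsilon, epsilon)
    return delta.detach()
```

At test time the returned delta would simply be added to any recording, including speakers unseen during optimization, which is what makes such a perturbation universal and applicable in an open-set setting.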
Pages: 4860 - 4864
Page count: 5
Related Papers
50 items in total
  • [41] ADVERSARIAL MANIFOLD LEARNING FOR SPEAKER RECOGNITION
    Chien, Jen-Tzung
    Peng, Kang-Ting
    2017 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU), 2017: 599 - 605
  • [42] Adversarial Defense on Harmony: Reverse Attack for Robust AI Models Against Adversarial Attacks
    Kim, Yebon
    Jung, Jinhyo
    Kim, Hyunjun
    So, Hwisoo
    Ko, Yohan
    Shrivastava, Aviral
    Lee, Kyoungwoo
    Hwang, Uiwon
    IEEE ACCESS, 2024, 12 : 176485 - 176497
  • [43] Neural adversarial learning for speaker recognition
    Chien, Jen-Tzung
    Peng, Kang-Ting
    COMPUTER SPEECH AND LANGUAGE, 2019, 58 : 422 - 440
  • [44] Biometric template protection for speaker recognition based on universal background models
    Billeb, Stefan
    Rathgeb, Christian
    Reininger, Herbert
    Kasper, Klaus
    Busch, Christoph
    IET BIOMETRICS, 2015, 4 (02) : 116 - 126
  • [45] SiFDetectCracker: An Adversarial Attack Against Fake Voice Detection Based on Speaker-Irrelative Features
    Hai, Xuan
    Liu, Xin
    Tan, Yuan
    Zhou, Qingguo
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 8552 - 8560
  • [46] Timbre-Reserved Adversarial Attack in Speaker Identification
    Wang, Qing
    Yao, Jixun
    Zhang, Li
    Guo, Pengcheng
    Xie, Lei
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 : 3848 - 3858
  • [47] CommanderUAP: a practical and transferable universal adversarial attacks on speech recognition models
    Sun, Zheng
    Zhao, Jinxiao
    Guo, Feng
    Chen, Yuxuan
    Ju, Lei
    CYBERSECURITY, 2024, 7 (01)
  • [48] Adversarial Attack against Modeling Attack on PUFs
    Wang, Sying-Jyan
    Chen, Yu-Shen
    Li, Katherine Shu-Min
    PROCEEDINGS OF THE 2019 56TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2019
  • [49] Adversarial Training Time Attack Against Discriminative and Generative Convolutional Models
    Chaudhury, Subhajit
    Roy, Hiya
    Mishra, Sourav
    Yamasaki, Toshihiko
    IEEE ACCESS, 2021, 9 : 109241 - 109259
  • [50] Omni: automated ensemble with unexpected models against adversarial evasion attack
    Shu, Rui
    Xia, Tianpei
    Williams, Laurie
    Menzies, Tim
    EMPIRICAL SOFTWARE ENGINEERING, 2022, 27 (01)