Generating Transferable Adversarial Examples for Speech Classification

Cited by: 7
Authors
Kim, Hoki [1]
Park, Jinseong [1]
Lee, Jaewook [1]
Affiliations
[1] Seoul Natl Univ, Gwanakro 1, Seoul, South Korea
Funding
National Research Foundation of Singapore
Keywords
Speech classification; Adversarial attack; Transferability
DOI
10.1016/j.patcog.2022.109286
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Despite the success of deep neural networks, the existence of adversarial attacks has revealed the vulnerability of neural networks in terms of security. Adversarial attacks add subtle noise to the original example, resulting in a false prediction. Although adversarial attacks have been mainly studied in the image domain, a recent line of research has discovered that speech classification systems are also exposed to adversarial attacks. By adding inaudible noise, an adversary can deceive speech classification systems and cause fatal issues in various applications, such as speaker identification and command recognition tasks. However, research on the transferability of audio adversarial examples is still limited. Thus, in this study, we first investigate the transferability of audio adversarial examples with different structures and conditions. Through extensive experiments, we discover that the transferability of audio adversarial examples is related to their noise sensitivity. Based on the analyses, we present a new adversarial attack called noise injected attack that generates highly transferable audio adversarial examples by injecting additive noise during the gradient ascent process. Our experimental results demonstrate that the proposed method outperforms other adversarial attacks in terms of transferability. © 2023 Elsevier Ltd. All rights reserved.
Pages: 13
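
The abstract describes the proposed noise injected attack only at a high level: a gradient-ascent attack that injects additive noise into the example at each step before the gradient is computed. Below is a minimal PyTorch sketch of that idea, not the paper's implementation; the function name noise_injected_attack, the Gaussian noise model, the L-infinity projection, and every hyperparameter value (eps, alpha, steps, sigma) are illustrative assumptions.

import torch
import torch.nn.functional as F

def noise_injected_attack(model, x, y, eps=0.002, alpha=0.0004,
                          steps=10, sigma=0.001):
    # Sketch of a PGD-style untargeted attack with noise injection.
    # model: a speech classifier mapping waveforms (batch, samples) to logits.
    # x: clean waveforms in [-1, 1]; y: true class labels.
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        # Inject additive Gaussian noise before computing the gradient;
        # per the abstract, this is what makes the resulting examples less
        # noise-sensitive and hence more transferable (sigma is assumed).
        x_noisy = (x_adv + sigma * torch.randn_like(x_adv)).requires_grad_(True)
        loss = F.cross_entropy(model(x_noisy), y)
        grad = torch.autograd.grad(loss, x_noisy)[0]
        # Gradient ascent on the loss, then project back into the
        # L-infinity ball of radius eps and the valid waveform range.
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
        x_adv = torch.clamp(x_adv, -1.0, 1.0).detach()
    return x_adv

The abstract's central empirical finding, that transferability correlates with noise sensitivity, suggests a simple diagnostic: measure how often a model's prediction on an adversarial example flips under small random perturbations. A hedged sketch follows; this metric definition is an assumption for illustration, not the paper's reported measure.

def noise_sensitivity(model, x_adv, trials=100, sigma=0.001):
    # Fraction of noisy copies whose predicted class differs from the
    # prediction on the unperturbed adversarial example.
    with torch.no_grad():
        base = model(x_adv).argmax(dim=-1)
        flips = torch.zeros_like(base, dtype=torch.float)
        for _ in range(trials):
            noisy = x_adv + sigma * torch.randn_like(x_adv)
            flips += (model(noisy).argmax(dim=-1) != base).float()
    return flips / trials

A natural variant of the attack, closer in spirit to this analysis, would average the gradient over several noisy copies per step; the single-sample loop above is kept for brevity.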