Generating Transferable Adversarial Examples for Speech Classification

Cited by: 7
Authors
Kim, Hoki [1]
Park, Jinseong [1]
Lee, Jaewook [1]
Affiliations
[1] Seoul Natl Univ, Gwanakro 1, Seoul, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Speech classification; Adversarial attack; Transferability;
DOI
10.1016/j.patcog.2022.109286
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Despite the success of deep neural networks, the existence of adversarial attacks has revealed the vulnerability of neural networks in terms of security. Adversarial attacks add subtle noise to the original example, resulting in a false prediction. Although adversarial attacks have been mainly studied in the image domain, a recent line of research has discovered that speech classification systems are also exposed to adversarial attacks. By adding inaudible noise, an adversary can deceive speech classification systems and cause fatal issues in various applications, such as speaker identification and command recognition tasks. However, research on the transferability of audio adversarial examples is still limited. Thus, in this study, we first investigate the transferability of audio adversarial examples with different structures and conditions. Through extensive experiments, we discover that the transferability of audio adversarial examples is related to their noise sensitivity. Based on the analyses, we present a new adversarial attack called noise injected attack that generates highly transferable audio adversarial examples by injecting additive noise during the gradient ascent process. Our experimental results demonstrate that the proposed method outperforms other adversarial attacks in terms of transferability. (c) 2023 Elsevier Ltd. All rights reserved.
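As a rough illustration of the idea the abstract describes, the sketch below implements an iterative L-infinity gradient-ascent attack in which additive Gaussian noise is injected into the current adversarial waveform before each gradient computation. This is a minimal sketch under stated assumptions, not the authors' exact algorithm: the classifier `model`, the cross-entropy loss, and all hyperparameters (`eps`, `alpha`, `steps`, `sigma`) are illustrative assumptions.

```python
import torch

def noise_injected_attack(model, x, y, eps=0.01, alpha=0.001, steps=40, sigma=0.005):
    # Hypothetical sketch of a "noise injected" attack on a waveform batch x
    # with labels y. At each ascent step, Gaussian noise is added to the
    # current adversarial example before the loss gradient is computed,
    # following the high-level description in the abstract. All values here
    # are illustrative assumptions, not the paper's settings.
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        # Inject additive noise during the gradient computation.
        noisy = (x_adv + sigma * torch.randn_like(x_adv)).requires_grad_(True)
        loss = loss_fn(model(noisy), y)
        grad = torch.autograd.grad(loss, noisy)[0]
        # Ascend the loss with a signed-gradient step, then project back
        # into the L-infinity ball of radius eps around the original x.
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
    return x_adv
```

Averaging the gradient over several noise draws per step would be a natural variant; the single-draw form above keeps the sketch as close as possible to ordinary iterative gradient ascent.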
Pages: 13
Related Papers
50 records in total
  • [41] Crafting transferable adversarial examples via contaminating the salient feature variance
    Ren, Yuchen
    Zhu, Hegui
    Sui, Xiaoyan
    Liu, Chong
    INFORMATION SCIENCES, 2023, 644
  • [42] Rethinking the optimization objective for transferable adversarial examples from a fuzzy perspective
    Yang, Xiangyuan
    Lin, Jie
    Zhang, Hanlin
    Zhao, Peng
NEURAL NETWORKS, 2025, 184
  • [43] UCG: A Universal Cross-Domain Generator for Transferable Adversarial Examples
    Li, Zhankai
    Wang, Weiping
    Li, Jie
    Chen, Kai
    Zhang, Shigeng
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 3023 - 3037
  • [44] Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
    Dong, Yinpeng
    Pang, Tianyu
    Su, Hang
    Zhu, Jun
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019 : 4307 - 4316
  • [45] GNP ATTACK: TRANSFERABLE ADVERSARIAL EXAMPLES VIA GRADIENT NORM PENALTY
    Wu, Tao
    Luo, Tie
    Wunsch, Donald C.
2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023 : 3110 - 3114
  • [46] Timing Attack on Random Forests for Generating Adversarial Examples
    Dan, Yuichiro
    Shibahara, Toshiki
    Takahashi, Junko
    ADVANCES IN INFORMATION AND COMPUTER SECURITY (IWSEC 2020), 2020, 12231 : 285 - 302
  • [47] Generating adversarial examples for DNN using pooling layers
    Zhang, Yueling
    Pu, Geguang
    Zhang, Min
    Yang, William
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2019, 37 (04) : 4615 - 4620
  • [48] On the Strengths of Pure Evolutionary Algorithms in Generating Adversarial Examples
    Bartlett, Antony
    Liem, Cynthia C. S.
    Panichella, Annibale
2023 IEEE/ACM INTERNATIONAL WORKSHOP ON SEARCH-BASED AND FUZZ TESTING, SBFT, 2023 : 1 - 8
  • [50] Generating unrestricted adversarial examples via three parameters
    Naderi, Hanieh
    Goli, Leili
    Kasaei, Shohreh
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (15) : 21919 - 21938