Imitation Attacks and Defenses for Black-box Machine Translation Systems

Cited: 0
Authors
Wallace, Eric [1]
Stern, Mitchell [1]
Song, Dawn [1]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversaries may look to steal or attack black-box NLP systems, either for financial gain or to exploit model errors. One setting of particular interest is machine translation (MT), where models have high commercial value and errors can be costly. We investigate possible exploitations of black-box MT systems and explore a preliminary defense against such threats. We first show that MT systems can be stolen by querying them with monolingual sentences and training models to imitate their outputs. Using simulated experiments, we demonstrate that MT model stealing is possible even when imitation models have different input data or architectures than their target models. Applying these ideas, we train imitation models that reach within 0.6 BLEU of three production MT systems on both high-resource and low-resource language pairs. We then leverage the similarity of our imitation models to transfer adversarial examples to the production systems. We use gradient-based attacks that expose inputs which lead to semantically incorrect translations, dropped content, and vulgar model outputs. To mitigate these vulnerabilities, we propose a defense that modifies translation outputs in order to misdirect the optimization of imitation models. This defense degrades the adversary's BLEU score and attack success rate at some cost in the defender's BLEU and inference speed.
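The model-stealing step the abstract describes can be sketched as a simple query-and-collect loop: the adversary sends monolingual sentences to the black-box system and treats its outputs as pseudo-parallel training data for an imitation model. This is a minimal illustration, not the paper's implementation; `black_box_translate` is a hypothetical stand-in for a production MT API.

```python
def black_box_translate(sentence: str) -> str:
    """Placeholder for the victim MT system; a real adversary would
    call a production translation API here. This toy version just
    reverses the string so the sketch is self-contained."""
    return sentence[::-1]


def collect_imitation_data(monolingual_corpus):
    """Build (source, pseudo-target) pairs by querying the victim model
    once per monolingual sentence."""
    pairs = []
    for src in monolingual_corpus:
        tgt = black_box_translate(src)  # one black-box query
        pairs.append((src, tgt))
    return pairs


# The collected pairs would then be used to train an ordinary seq2seq
# model; per the abstract, imitation models trained this way can come
# within 0.6 BLEU of production systems.
corpus = ["hello world", "machine translation"]
data = collect_imitation_data(corpus)
```

Note that the imitation model needs no access to the victim's training data or architecture, which is what makes the transfer of gradient-based adversarial examples back to the production system possible.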
Pages: 5531-5546
Number of pages: 16