Translate Your Gibberish: Black-Box Adversarial Attack on Machine Translation Systems

Cited by: 0
Authors
A. Chertkov [1 ]
O. Tsymboi [2 ]
M. Pautov [3 ]
I. Oseledets [1 ]
Affiliations
[1] Skolkovo Institute of Science and Technology, Institute of Numerical Mathematics
[2] Moscow Institute of Physics and Technology
[3] Russian Academy of Sciences
[4] AIRI
DOI: 10.1007/s10958-024-07428-y
Abstract
Neural networks are widely deployed in industrial-scale natural language processing tasks, and perhaps most often they serve as components of automatic machine translation systems. In this work, we present a simple approach to fooling state-of-the-art machine translation tools in the task of translation from Russian to English and vice versa. Using a novel black-box, gradient-free, tensor-based optimizer, we show that many online translation tools, such as Google, DeepL, and Yandex, may both produce wrong or offensive translations for nonsensical adversarial input queries and refuse to translate seemingly benign input phrases. This vulnerability may interfere with learning a new language and simply worsen the user's experience with machine translation systems; hence, additional improvements of these tools are required to provide better translation.
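
The attack setting described in the abstract is black-box and gradient-free: the attacker only submits queries to the translation service and scores the returned text, with no access to model internals. The sketch below is a minimal illustration of that query-and-score loop, using a naive random-mutation search rather than the paper's tensor-based optimizer; translate and badness_score are hypothetical placeholders for the real service call and the real attack objective (wrong, offensive, or refused translations).

import random
import string

ALPHABET = string.ascii_lowercase + " "  # character pool for candidate "gibberish" queries


def translate(text: str) -> str:
    """Hypothetical stand-in for an online MT API (Google, DeepL, Yandex, ...).

    A real attack would send `text` to the service and return its translation;
    here we simply echo the input so the sketch runs offline.
    """
    return text


def badness_score(source: str, translation: str) -> float:
    """Toy objective: reward degenerate-looking outputs.

    The actual objective would detect wrong/offensive translations or refusals;
    as a placeholder we measure how much shorter the output is than the input.
    """
    return max(0.0, len(source) - len(translation)) / max(1, len(source))


def mutate(text: str) -> str:
    """Replace one random character -- a crude stand-in for the tensor-based search."""
    i = random.randrange(len(text))
    return text[:i] + random.choice(ALPHABET) + text[i + 1:]


def black_box_attack(seed: str, budget: int = 200) -> str:
    """Gradient-free search: keep the candidate query with the highest badness score."""
    best, best_score = seed, badness_score(seed, translate(seed))
    for _ in range(budget):
        cand = mutate(best)
        score = badness_score(cand, translate(cand))
        if score > best_score:
            best, best_score = cand, score
    return best


if __name__ == "__main__":
    print(black_box_attack("privet kak dela"))

In the paper's setting, the budget of queries is the main constraint, and the tensor-based optimizer replaces the random mutation step with a more sample-efficient discrete search over the space of input strings.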
Pages: 221-233
Number of pages: 12
Related Papers (50 total)
  • [1] Black-box Adversarial Machine Learning Attack on Network Traffic Classification
    Usama, Muhammad
    Qayyum, Adnan
    Qadir, Junaid
    Al-Fuqaha, Ala
    2019 15TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE (IWCMC), 2019, : 84 - 89
  • [2] Simulator Attack+ for Black-Box Adversarial Attack
    Ji, Yimu
    Ding, Jianyu
    Chen, Zhiyu
    Wu, Fei
    Zhang, Chi
    Sun, Yiming
    Sun, Jing
    Liu, Shangdong
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 636 - 640
  • [3] Improved Adversarial Attack against Black-box Machine Learning Models
    Xu, Jiahui
    Wang, Chen
    Li, Tingting
    Xiang, Fengtao
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 5907 - 5912
  • [4] Amora: Black-box Adversarial Morphing Attack
    Wang, Run
    Juefei-Xu, Felix
    Guo, Qing
    Huang, Yihao
    Xie, Xiaofei
    Ma, Lei
    Liu, Yang
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 1376 - 1385
  • [5] Adversarial Eigen Attack on Black-Box Models
    Zhou, Linjun
    Cui, Peng
    Zhang, Xingxuan
    Jiang, Yinan
    Yang, Shiqiang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 15233 - 15241
  • [6] A Black-Box Adversarial Attack for Poisoning Clustering
    Cina, Antonio Emanuele
    Torcinovich, Alessandro
    Pelillo, Marcello
    PATTERN RECOGNITION, 2022, 122
  • [7] Saliency Attack: Towards Imperceptible Black-box Adversarial Attack
    Dai, Zeyu
    Liu, Shengcai
    Li, Qing
    Tang, Ke
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2023, 14 (03)
  • [8] Boosting Black-box Adversarial Attack with a Better Convergence
    Yin, Heng
    Wang, Jindong
    Mi, Yan
    Zhang, Xiaoning
    2020 5TH INTERNATIONAL CONFERENCE ON MECHANICAL, CONTROL AND COMPUTER ENGINEERING (ICMCCE 2020), 2020, : 1234 - 1238
  • [9] An Effective Way to Boost Black-Box Adversarial Attack
    Feng, Xinjie
    Yao, Hongxun
    Che, Wenbin
    Zhang, Shengping
    MULTIMEDIA MODELING (MMM 2020), PT I, 2020, 11961 : 393 - 404
  • [10] Generalizable Black-Box Adversarial Attack With Meta Learning
    Yin, Fei
    Zhang, Yong
    Wu, Baoyuan
    Feng, Yan
    Zhang, Jingyi
    Fan, Yanbo
    Yang, Yujiu
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (03) : 1804 - 1818