On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems

Cited by: 18
Authors:
Tyukin, Ivan Y. [1 ,2 ,3 ]
Higham, Desmond J. [4 ]
Gorban, Alexander N. [1 ,5 ]
Affiliations:
[1] Univ Leicester, Sch Math & Actuarial Sci, Leicester LE1 7RH, Leics, England
[2] Norwegian Univ Sci & Technol, Trondheim, Norway
[3] St Petersburg State Electrotech Univ, St Petersburg, Russia
[4] Univ Edinburgh, Sch Math, Edinburgh EH9 3FD, Midlothian, Scotland
[5] Lobachevsky Univ, Nizhnii Novgorod, Russia
Funding:
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords:
Adversarial examples; adversarial attacks; stochastic separation theorems; artificial intelligence; machine learning; neural networks
DOI: 10.1109/ijcnn48605.2020.9207472
Chinese Library Classification: TP18 [Theory of artificial intelligence]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract:
In this work we present a formal theoretical framework for assessing and analyzing two classes of malevolent action towards generic Artificial Intelligence (AI) systems. Our results apply to general multi-class classifiers that map from an input space into a decision space, including the artificial neural networks used in deep learning applications. Two classes of attacks are considered. The first class involves adversarial examples: small perturbations of the input data that cause misclassification. The second class, introduced here for the first time and named stealth attacks, involves small perturbations to the AI system itself. The perturbed system produces whatever output the attacker desires on a specific small data set, perhaps even a single input, but performs normally on a validation set (which is unknown to the attacker). We show that in both cases the dimensionality of the AI's decision-making space is a major contributor to its susceptibility. For attacks based on adversarial examples, a second crucial factor is the absence of local concentrations in the data probability distribution, a property known as Smeared Absolute Continuity. According to our findings, robustness to adversarial examples requires either (a) the data distributions in the AI's feature space to have concentrated probability density functions or (b) the dimensionality of the AI's decision variables to be sufficiently small. We also show how to construct stealth attacks on high-dimensional AI systems that are hard to spot unless the validation set is made exponentially large.
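To make the stealth-attack idea concrete, below is a minimal NumPy sketch of the high-dimensional mechanism the abstract describes. It is not the paper's exact construction: the feature map phi, the linear head W, the dimension d, and the threshold and gain of the added neuron are all illustrative stand-ins. The attacker grafts a single ReLU neuron whose weight vector is aligned with the trigger input's feature representation; because random high-dimensional feature vectors are nearly orthogonal, the neuron stays silent on validation data with high probability.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 512          # dimension of the feature space (illustrative)
n_classes = 10

# Stand-in for a trained classifier: feature map phi and a linear head W.
W = rng.normal(size=(n_classes, d)) / np.sqrt(d)

def phi(x):
    # Hypothetical feature map; plain normalisation used as a stand-in.
    return x / np.linalg.norm(x)

def logits(x, stealth=None):
    z = W @ phi(x)
    if stealth is not None:
        w, b, target, gain = stealth
        # Grafted neuron: ReLU(<w, phi(x)> - b). It fires only when
        # phi(x) is almost parallel to w, i.e. essentially on the trigger.
        a = max(np.dot(w, phi(x)) - b, 0.0)
        z[target] += gain * a
    return z

# Attacker's side: a single trigger input and the grafted neuron.
trigger = rng.normal(size=d)
w = phi(trigger)              # weight aligned with the trigger's features
stealth = (w, 0.9, 7, 100.0)  # threshold 0.9, force class 7 with gain 100

print("clean prediction on trigger:    ", np.argmax(logits(trigger)))
print("stealthed prediction on trigger:", np.argmax(logits(trigger, stealth)))

# Owner's side: on a random validation set the neuron almost never fires,
# so the perturbed system is indistinguishable from the original.
val = rng.normal(size=(1000, d))
changed = sum(np.argmax(logits(x)) != np.argmax(logits(x, stealth)) for x in val)
print("validation predictions changed:", changed, "/ 1000")
```

Under these assumptions the grafted neuron flips the prediction on the trigger while, in a typical run, altering none of the 1000 validation predictions: the inner product between a fixed unit vector and a random unit vector in 512 dimensions concentrates near zero, so the 0.9 threshold is essentially never crossed. This is the sense in which the abstract says detecting such a perturbation would require an exponentially large validation set.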
Pages: 6