On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems

Citations: 18
Authors
Tyukin, Ivan Y. [1 ,2 ,3 ]
Higham, Desmond J. [4 ]
Gorban, Alexander N. [1 ,5 ]
Affiliations
[1] Univ Leicester, Sch Math & Actuarial Sci, Leicester LE1 7RH, Leics, England
[2] Norwegian Univ Sci & Technol, Trondheim, Norway
[3] St Petersburg State Electrotech Univ, St Petersburg, Russia
[4] Univ Edinburgh, Sch Math, Edinburgh EH9 3FD, Midlothian, Scotland
[5] Lobachevsky Univ, Nizhnii Novgorod, Russia
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Adversarial examples; adversarial attacks; stochastic separation theorems; artificial intelligence; machine learning; neural networks;
DOI
10.1109/ijcnn48605.2020.9207472
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
In this work we present a formal theoretical framework for assessing and analyzing two classes of malevolent action towards generic Artificial Intelligence (AI) systems. Our results apply to general multi-class classifiers that map from an input space into a decision space, including artificial neural networks used in deep learning applications. Two classes of attacks are considered. The first class involves adversarial examples and concerns the introduction of small perturbations of the input data that cause misclassification. The second class, introduced here for the first time and named stealth attacks, involves small perturbations to the AI system itself. Here the perturbed system produces whatever output is desired by the attacker on a specific small data set, perhaps even a single input, but performs as normal on a validation set (which is unknown to the attacker). We show that in both cases, i.e., in the case of an attack based on adversarial examples and in the case of a stealth attack, the dimensionality of the AI's decision-making space is a major contributor to the AI's susceptibility. For attacks based on adversarial examples, a second crucial parameter is the absence of local concentrations in the data probability distribution, a property known as Smeared Absolute Continuity. According to our findings, robustness to adversarial examples requires either (a) the data distributions in the AI's feature space to have concentrated probability density functions or (b) the dimensionality of the AI's decision variables to be sufficiently small. We also show how to construct stealth attacks on high-dimensional AI systems that are hard to spot unless the validation set is made exponentially large.
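The stealth-attack scenario described in the abstract can be illustrated with a short, self-contained sketch. The following numpy toy is a hypothetical illustration only, not the construction from the paper: it plants a ReLU "trigger" neuron on top of a random linear classifier so that one attacker-chosen input is misclassified as desired, while predictions on a random validation set are essentially unchanged. The dimension d, the 0.9 threshold factor, and the 10.0 gain are arbitrary choices made only to show why high dimensionality makes such a perturbation hard to detect with a validation set of realistic size.

```python
# Illustrative sketch only (assumed toy setup, not the paper's construction):
# a "stealth" modification of a linear classifier that flips the decision on
# one attacker-chosen trigger input while leaving behaviour on random
# validation data essentially unchanged.
import numpy as np

rng = np.random.default_rng(0)

d = 2000                        # dimensionality of the feature/decision space
W = rng.normal(size=(2, d))     # original two-class linear classifier: logits = W @ x

def predict(W, x, extra=None):
    """Predicted class; `extra = (w, b, target)` is an optional planted trigger neuron."""
    logits = W @ x
    if extra is not None:
        w, b, target = extra
        logits[target] += 10.0 * max(0.0, w @ x - b)   # ReLU neuron added by the attacker
    return int(np.argmax(logits))

# Attacker picks a single input and a desired (wrong) label for it.
trigger = rng.normal(size=d)
desired = 1 - predict(W, trigger)

# Stealth modification: a neuron aligned with the trigger whose threshold sits
# just below the trigger's own activation.  For Gaussian-like validation data,
# <trigger, x> has standard deviation ~ ||trigger||, far below the threshold
# 0.9 * ||trigger||^2 when d is large, so the neuron almost never fires
# off-trigger and the perturbation is hard to spot by validation.
extra = (trigger, 0.9 * float(trigger @ trigger), desired)

assert predict(W, trigger, extra) == desired     # trigger is now classified as the attacker wishes

X_val = rng.normal(size=(5000, d))               # validation set unknown to the attacker
agree = np.mean([predict(W, x) == predict(W, x, extra) for x in X_val])
print(f"agreement with the unmodified classifier on validation data: {agree:.4f}")
```

In this toy setting the printed agreement is typically 1.0000, which mirrors the abstract's point that detecting such an attack would require an exponentially large validation set; in low dimension the same planted neuron would fire on a noticeable fraction of validation inputs and be easy to spot.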
Pages: 6