Jangseung: A Guardian for Machine Learning Algorithms to Protect Against Poisoning Attacks

Cited by: 1
Authors
Wolf, Shaya [1 ]
Gamboa, Woodrow [2 ]
Borowczak, Mike [1 ]
Affiliations
[1] Univ Wyoming, Comp Sci Dept, Laramie, WY 82071 USA
[2] Stanford Univ, Comp Sci Dept, Stanford, CA 94305 USA
Keywords
Adversarial Perturbations; Poisoning Defense; Smart City Applications;
DOI
10.1109/ISC253183.2021.9562816
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Many smart city applications rely on machine learning; however, adversarial perturbations can be injected into training data to cause models to return skewed results. Jangseung is a preprocessor that limits the effects of poisoning attacks without impeding accuracy. Jangseung was created to guard support vector machines from poisoned data by utilizing anomaly detection algorithms. The preprocessor was tested through experiments on two different datasets: the MNIST dataset and the UCI Breast Cancer Wisconsin (Diagnostic) dataset. For both datasets, two identical models were trained and then attacked using the same adversarial points, one protected by Jangseung and the other left unguarded. In all cases, the protected model outperformed the unprotected model, and in the best case the Jangseung-protected model outperformed the unguarded model by 96.2%. The under-trained, undefended MNIST models had an average accuracy of 53.2%, while their identical Jangseung-protected counterparts had a drastically higher average accuracy of 91.1%. Likewise, on the UCI-Cancer dataset, attack sequences lowered the accuracy of the model to as low as 75.51%, whereas Jangseung-defended models performed with 88.18% accuracy or better. Jangseung was an effective defense against adversarial perturbations for SVMs across different datasets and anomaly detection algorithms.
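The core idea described in the abstract, screening training points with an anomaly detector before fitting the SVM, can be illustrated with a short sketch. The snippet below is not the authors' implementation: the use of scikit-learn's IsolationForest, the contamination rate, and the simulated out-of-distribution poisoning attack are illustrative assumptions standing in for Jangseung's actual anomaly detection algorithms and the paper's attack sequences.

```python
# Minimal sketch of an anomaly-detection preprocessor guarding an SVM
# against poisoned training data (illustrative; not the paper's code).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Clean data: the Breast Cancer Wisconsin (Diagnostic) dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulated poisoning attack (assumption): inject out-of-distribution points
# with arbitrary labels into the training set.
n_poison = len(y_train) // 10
X_poison = rng.normal(loc=X_train.mean(axis=0),
                      scale=5.0 * X_train.std(axis=0),
                      size=(n_poison, X_train.shape[1]))
y_poison = rng.integers(0, 2, size=n_poison)
X_attacked = np.vstack([X_train, X_poison])
y_attacked = np.concatenate([y_train, y_poison])

def anomaly_filter(X, y, contamination=0.1):
    """Drop training points the anomaly detector flags as outliers (-1)."""
    keep = IsolationForest(contamination=contamination,
                           random_state=0).fit_predict(X) == 1
    return X[keep], y[keep]

# Unguarded model: trained directly on the attacked data.
unguarded = SVC().fit(X_attacked, y_attacked)

# Guarded model: the attacked data is filtered before training.
X_clean, y_clean = anomaly_filter(X_attacked, y_attacked)
guarded = SVC().fit(X_clean, y_clean)

print("unguarded test accuracy:", unguarded.score(X_test, y_test))
print("guarded test accuracy:  ", guarded.score(X_test, y_test))
```

The filter operates only on feature vectors, so it targets adversarial points that fall outside the training distribution; the detector and its contamination rate would need to be tuned per dataset.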
Pages: 7