Exploring the Impact of Data Poisoning Attacks on Machine Learning Model Reliability

Cited by: 9
Authors
Verde, Laura [1]
Marulli, Fiammetta [1]
Marrone, Stefano [1]
Affiliation
[1] Univ Campania L Vanvitelli, Dept Maths & Phys, Caserta, Italy
Keywords
Poisoned Big Data; Data Poisoning Attacks; Security; Reliability; Resilient Machine Learning; Disorders detection; Voice quality assessment
DOI
10.1016/j.procs.2021.09.032
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Recent years have seen the widespread adoption of Artificial Intelligence techniques in several domains, including healthcare, justice, assisted driving and Natural Language Processing (NLP) based applications (e.g., fake news detection). These are just a few examples of domains that are particularly critical and sensitive to the reliability of the adopted machine learning systems. In healthcare, in particular, several Artificial Intelligence approaches have been adopted to support easy and reliable solutions aimed at improving early diagnosis, personalized treatment, remote patient monitoring and better decision-making, with a consequent reduction of healthcare costs. Recent studies have shown that these techniques are vulnerable to attacks by adversaries at different phases of the machine learning pipeline. Poisoned data sets are among the most common attacks on the reliability of Artificial Intelligence approaches. Noise, for example, can have a significant impact on the overall performance of a machine learning model. This study examines how strongly noise affects classification algorithms. In detail, the reliability of several machine learning techniques in correctly distinguishing pathological from healthy voices when trained on poisoned data was evaluated. Voice samples selected from a publicly available database widely used in the research community, the Saarbruecken Voice Database, were processed and analysed to evaluate the resilience and classification accuracy of these techniques. All analyses are evaluated in terms of accuracy, specificity, sensitivity, F1-score and ROC area. (C) 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0). Peer-review under responsibility of the scientific committee of KES International.
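The experimental setup described in the abstract (poison the training data, retrain the classifiers, compare the usual diagnostic metrics) can be outlined in a few lines of code. The sketch below is a minimal illustration, not the authors' actual pipeline: it assumes synthetic stand-in features (the paper uses acoustic features from the Saarbruecken Voice Database, which the abstract does not detail), a scikit-learn SVM as an example classifier, and simple label flipping as the poisoning attack, and it reports the metrics named above (accuracy, specificity, sensitivity, F1-score and ROC area).

```python
# Hedged sketch: simulate a label-flipping poisoning attack on a binary
# voice-pathology-style classifier and report accuracy, sensitivity,
# specificity, F1-score and ROC area. The synthetic features, the SVM
# and the flip rates are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             f1_score, roc_auc_score)

rng = np.random.default_rng(42)

# Stand-in for acoustic features extracted from voice recordings.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

def poison_labels(labels, flip_rate, rng):
    """Flip a fraction of the training labels to simulate data poisoning."""
    poisoned = labels.copy()
    n_flip = int(flip_rate * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

def evaluate(y_true, y_pred, y_score):
    """Compute the metrics listed in the abstract for a binary task."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": tp / (tp + fn),   # recall on the positive (e.g., pathological) class
        "specificity": tn / (tn + fp),   # recall on the negative (e.g., healthy) class
        "f1": f1_score(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),
    }

for flip_rate in (0.0, 0.1, 0.3):        # 0% = clean baseline
    y_poisoned = poison_labels(y_tr, flip_rate, rng)
    clf = SVC(probability=True, random_state=42).fit(X_tr, y_poisoned)
    metrics = evaluate(y_te, clf.predict(X_te), clf.predict_proba(X_te)[:, 1])
    print(f"flip rate {flip_rate:.0%}: " +
          ", ".join(f"{k}={v:.3f}" for k, v in metrics.items()))
```

The loop prints one row of metrics per contamination level, so the clean baseline (0%) can be compared directly against the poisoned runs, which is the kind of resilience comparison the study performs on real voice data.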
Pages: 2624-2632
Page count: 9
Related Papers
50 records in total
  • [31] Ethics of Adversarial Machine Learning and Data Poisoning
    Laurynas Adomaitis
    Rajvardhan Oak
    Digital Society, 2023, 2 (1):
  • [32] Jangseung: A Guardian for Machine Learning Algorithms to Protect Against Poisoning Attacks
    Wolf, Shaya
    Gamboa, Woodrow
    Borowczak, Mike
    2021 IEEE INTERNATIONAL SMART CITIES CONFERENCE (ISC2), 2021,
  • [33] An Analytical Framework for Evaluating Successful Poisoning Attacks on Machine Learning Algorithms
    M. Surekha
    Anil Kumar Sagar
    Vineeta Khemchandani
    SN Computer Science, 6 (4)
  • [34] Threats to Training: A Survey of Poisoning Attacks and Defenses on Machine Learning Systems
    Wang, Zhibo
    Ma, Jingjing
    Wang, Xue
    Hu, Jiahui
    Qin, Zhan
    Ren, Kui
    ACM COMPUTING SURVEYS, 2023, 55 (07)
  • [35] Poisoning attacks on machine learning models in cyber systems and mitigation strategies
    Izmailov, Rauf
    Venkatesan, Sridhar
    Reddy, Achyut
    Chadha, Ritu
    De Lucia, Michael
    Oprea, Alina
    DISRUPTIVE TECHNOLOGIES IN INFORMATION SCIENCES VI, 2022, 12117
  • [36] On the Performance Impact of Poisoning Attacks on Load Forecasting in Federated Learning
    Qureshi, Naik Bakht Sania
    Kim, Dong-Hoon
    Lee, Jiwoo
    Lee, Eun-Kyu
    UBICOMP/ISWC '21 ADJUNCT: PROCEEDINGS OF THE 2021 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2021 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS, 2021, : 64 - 66
  • [37] Data Poisoning Attacks on Multi-Task Relationship Learning
    Zhao, Mengchen
    An, Bo
    Yu, Yaodong
    Liu, Sulin
    Pan, Sinno Jialin
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 2628 - 2635
  • [38] Data Poisoning Attacks to Deep Learning Based Recommender Systems
    Huang, Hai
    Mu, Jiaming
    Gong, Neil Zhenqiang
    Li, Qi
    Liu, Bin
    Xu, Mingwei
    28TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2021), 2021,
  • [39] Adversarial data poisoning attacks against the PC learning algorithm
    Alsuwat, Emad
    Alsuwat, Hatim
    Valtorta, Marco
    Farkas, Csilla
    INTERNATIONAL JOURNAL OF GENERAL SYSTEMS, 2020, 49 (01) : 3 - 31
  • [40] Poisoning Attacks on Data-Driven Utility Learning in Games
    Jia, Ruoxi
    Konstantakopoulos, Ioannis C.
    Li, Bo
    Spanos, Costas
    2018 ANNUAL AMERICAN CONTROL CONFERENCE (ACC), 2018, : 5774 - 5780