EdDSA Shield: Fortifying Machine Learning Against Data Poisoning Threats in Continual Learning

Cited by: 0
Authors
Nageswari, Akula [1 ]
Sanjeevulu, Vasundra [2 ]
Affiliations
[1] Jawaharlal Nehru Technol Univ Ananthapur, Ananthapuramu, India
[2] JNTUA Coll Engn, Ananthapuramu, India
Keywords
Continual learning; Machine learning; EdDSA; Data poisoning; Defense; CONCEPT DRIFT;
DOI
10.1007/978-981-97-8031-0_107
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Continual learning in machine learning systems requires models to adapt and evolve based on new data and experiences. However, this dynamic nature also introduces a vulnerability to data poisoning attacks, where maliciously crafted input can lead to misleading model updates. In this research, we propose a novel approach utilizing the EdDSA digital signature scheme to safeguard the integrity of data streams in continual learning scenarios. By leveraging EdDSA, we establish a robust defense against data poisoning attempts, maintaining the model's trustworthiness and performance over time. Through extensive experimentation on diverse datasets and continual learning scenarios, we demonstrate the efficacy of our proposed approach. The results indicate a significant reduction in susceptibility to data poisoning attacks, even in the presence of sophisticated adversaries.
Pages: 1018 - 1028
Page count: 11
Related Papers
50 records total
  • [1] Targeted Data Poisoning Attacks Against Continual Learning Neural Networks
    Li, Huayu
    Ditzler, Gregory
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [2] Data poisoning attacks against machine learning algorithms
    Yerlikaya, Fahri Anil
    Bahtiyar, Serif
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 208
  • [3] Securing Machine Learning Against Data Poisoning Attacks
    Allheeib, Nasser
    INTERNATIONAL JOURNAL OF DATA WAREHOUSING AND MINING, 2024, 20 (01)
  • [4] DATA POISONING ATTACK AIMING THE VULNERABILITY OF CONTINUAL LEARNING
    Han, Gyojin
    Choi, Jaehyun
    Hong, Hyeong Gwon
    Kim, Junmo
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023 : 1905 - 1909
  • [5] Machine Learning Security Against Data Poisoning: Are We There Yet?
    Cina, Antonio Emanuele
    Grosse, Kathrin
    Demontis, Ambra
    Biggio, Battista
    Roli, Fabio
    Pelillo, Marcello
    COMPUTER, 2024, 57 (03) : 26 - 34
  • [6] Poisoning Attacks Against Machine Learning: Can Machine Learning Be Trustworthy?
    Oprea, Alina
    Singhal, Anoop
    Vassilev, Apostol
    COMPUTER, 2022, 55 (11) : 94 - 99
  • [7] Deep behavioral analysis of machine learning algorithms against data poisoning
    Paracha, Anum
    Arshad, Junaid
    Ben Farah, Mohamed
    Ismail, Khalid
    INTERNATIONAL JOURNAL OF INFORMATION SECURITY, 2025, 24 (01)
  • [8] A Flexible Poisoning Attack Against Machine Learning
    Jiang, Wenbo
    Li, Hongwei
    Liu, Sen
    Ren, Yanzhi
    He, Miao
    ICC 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2019,
  • [9] SHIELD - Secure Aggregation Against Poisoning in Hierarchical Federated Learning
    Siriwardhana, Yushan
    Porambage, Pawani
    Liyanage, Madhusanka
    Marchal, Samuel
    Ylianttila, Mika
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2025, 22 (02) : 1845 - 1863
  • [10] Clinical applications of continual learning machine learning
    Lee, Cecilia S.
    Lee, Aaron Y.
    LANCET DIGITAL HEALTH, 2020, 2 (06) : E279 - E281