EdDSA Shield: Fortifying Machine Learning Against Data Poisoning Threats in Continual Learning

Cited: 0
Authors
Nageswari, Akula [1 ]
Sanjeevulu, Vasundra [2 ]
Affiliations
[1] Jawaharlal Nehru Technol Univ Ananthapur, Ananthapuramu, India
[2] JNTUA Coll Engn, Ananthapuramu, India
Keywords
Continual learning; Machine learning; EdDSA; Data poisoning; Defense; Concept drift
DOI
10.1007/978-981-97-8031-0_107
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Continual learning in machine learning systems requires models to adapt and evolve based on new data and experiences. However, this dynamic nature also introduces a vulnerability to data poisoning attacks, where maliciously crafted input can lead to misleading model updates. In this research, we propose a novel approach utilizing the EdDSA digital signature scheme to safeguard the integrity of data streams in continual learning scenarios. By leveraging EdDSA, we establish a robust defense against data poisoning attempts, maintaining the model's trustworthiness and performance over time. Through extensive experimentation on diverse datasets and continual learning scenarios, we demonstrate the efficacy of our proposed approach. The results indicate a significant reduction in susceptibility to data poisoning attacks, even in the presence of sophisticated adversaries.
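The defense the abstract describes amounts to a sign-then-verify gate on the training stream: a trusted data source signs each batch, and the learner discards any batch whose signature does not verify before applying a model update. The sketch below illustrates that pattern. Since EdDSA (Ed25519) itself requires a third-party library such as `cryptography`, this dependency-free stand-in uses HMAC-SHA256 from the Python standard library; the gating logic is the same, and the key name and helper functions are illustrative assumptions, not the paper's implementation.

```python
import hashlib
import hmac
import json

# Hypothetical key shared between the trusted data collector and the
# learner. With real EdDSA this would be a private/public key pair.
SECRET_KEY = b"trusted-collector-key"

def sign_batch(batch: list) -> bytes:
    """Tag a training batch at the trusted data source."""
    payload = json.dumps(batch, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()

def verified_batches(stream):
    """Yield only batches whose tag verifies; drop tampered ones."""
    for batch, tag in stream:
        payload = json.dumps(batch, sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
        # Constant-time comparison to avoid timing side channels.
        if hmac.compare_digest(tag, expected):
            yield batch  # safe to use for a model update

# A clean batch passes the gate; a poisoned batch reusing the clean
# batch's tag fails verification and is dropped before training.
clean = [1.0, 2.0, 3.0]
tag = sign_batch(clean)
poisoned = [1.0, 2.0, 99.0]
accepted = list(verified_batches([(clean, tag), (poisoned, tag)]))
print(accepted)  # [[1.0, 2.0, 3.0]]
```

An attacker who injects or modifies batches in transit cannot forge a valid tag without the signing key, so poisoned updates never reach the model; swapping the HMAC calls for Ed25519 sign/verify additionally lets the learner check integrity without holding any secret.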
Pages: 1018-1028
Page count: 11
Related Papers
50 records
  • [11] Threats to Training: A Survey of Poisoning Attacks and Defenses on Machine Learning Systems
    Wang, Zhibo
    Ma, Jingjing
    Wang, Xue
    Hu, Jiahui
    Qin, Zhan
    Ren, Kui
    ACM COMPUTING SURVEYS, 2023, 55 (07)
  • [12] Ethics of Adversarial Machine Learning and Data Poisoning
    Laurynas Adomaitis
    Rajvardhan Oak
    Digital Society, 2023, 2 (1):
  • [13] Data Poisoning Attacks on Federated Machine Learning
    Sun, Gan
    Cong, Yang
    Dong, Jiahua
    Wang, Qiang
    Lyu, Lingjuan
    Liu, Ji
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (13) : 11365 - 11375
  • [14] BrainWash: A Poisoning Attack to Forget in Continual Learning
    Abbasi, Ali
    Nooralinejad, Parsa
    Pirsiavash, Hamed
    Kolouri, Soheil
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 24057 - 24067
  • [15] Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
    Cina, Antonio Emanuele
    Grosse, Kathrin
    Demontis, Ambra
    Vascon, Sebastiano
    Zellinger, Werner
    Moser, Bernhard A.
    Oprea, Alina
    Biggio, Battista
    Pelillo, Marcello
    Roli, Fabio
    ACM COMPUTING SURVEYS, 2023, 55 (13S)
  • [16] A Defense Method against Poisoning Attacks on IoT Machine Learning Using Poisonous Data
    Chiba, Tomoki
    Sei, Yuichi
    Tahara, Yasuyuki
    Ohsuga, Akihiko
    2020 IEEE THIRD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE 2020), 2020, : 100 - 107
  • [17] A Countermeasure Method Using Poisonous Data Against Poisoning Attacks on IoT Machine Learning
    Chiba, Tomoki
    Sei, Yuichi
    Tahara, Yasuyuki
    Ohsuga, Akihiko
    INTERNATIONAL JOURNAL OF SEMANTIC COMPUTING, 2021, 15 (02) : 215 - 240
  • [18] Machine Unlearning by Reversing the Continual Learning
    Zhang, Yongjing
    Lu, Zhaobo
    Zhang, Feng
    Wang, Hao
    Li, Shaojing
    APPLIED SCIENCES-BASEL, 2023, 13 (16):
  • [19] Continual Learning for Neural Machine Translation
    Cao, Yue
    Wei, Hao-Ran
    Chen, Boxing
    Wan, Xiaojun
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 3964 - 3974
  • [20] Model poisoning attacks against distributed machine learning systems
    Tomsett, Richard
    Chan, Kevin
    Chakraborty, Supriyo
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006