Adversarial Poisoning Attacks on Federated Learning in Metaverse

Cited by: 1
Authors:
Aristodemou, Marios [1 ]
Liu, Xiaolan [2 ]
Lambotharan, Sangarapillai [1 ]
Affiliations:
[1] Loughborough University, Wolfson School of Mechanical, Electrical and Manufacturing Engineering, Loughborough, Leicestershire, England
[2] Loughborough University, Institute for Digital Technologies, London Campus, London, England
Keywords:
Adversarial machine learning; Federated learning; Bayesian optimisation; Metaverse; Poisoning attacks
DOI:
10.1109/ICC45041.2023.10279748
CLC number:
TN [Electronic technology, communication technology]
Subject classification code:
0809
Abstract:
The Metaverse is envisioned as a human-centric framework that offers a new way of living through comprehensively immersive experiences in the education, medicine and entertainment domains. Since a large amount of private data is generated by each user accessing the Metaverse, the emerging federated learning (FL) paradigm provides an effective solution to the potential privacy leakage of data sharing by adopting local training and global model aggregation. However, model aggregation is susceptible to adversarial poisoning attacks, which poses critical risks to the privacy-preserving mechanism of the Metaverse. In this research, we develop two poisoning attacks that emulate the behaviour of adversaries likely to exist in practical Metaverse scenarios. First, we develop a data poisoning attack that uses Bayesian optimisation to search for the optimal parameters for generating adversarial examples to conduct reversed adversarial training. Second, we develop a model poisoning attack that applies layer optimisation, again via Bayesian optimisation, to search for the optimal weights of a convolutional layer so as to induce uncertainty in the classification. Numerical results show that both attack schemes evade recognition by the FL server, and that layer optimisation is the stronger poisoning attack.
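The core tension the abstract describes — an attacker tuning a perturbation that maximises damage while staying below the server's detection threshold — can be caricatured in a few lines. The sketch below is purely illustrative and is not the paper's method: `attack_utility` is a made-up, hypothetical attacker objective, and an exhaustive grid search stands in for the sample-efficient Bayesian optimiser the authors actually use.

```python
import math

def attack_utility(eps, detect_thresh=0.6):
    """Hypothetical attacker objective (illustrative only): damage to the
    global model grows with perturbation strength eps, but updates that
    drift too far from the honest clients are rejected by the FL server."""
    damage = 1.0 - math.exp(-3.0 * eps)  # larger eps -> more model damage
    drift = eps                          # larger eps -> easier to detect
    return damage if drift <= detect_thresh else 0.0  # rejected update scores 0

def search_epsilon(n_grid=100):
    """Exhaustively search eps in [0, 1] for the best attack utility.
    The paper instead uses Bayesian optimisation, which finds such an
    optimum with far fewer (expensive) objective evaluations."""
    grid = [i / n_grid for i in range(n_grid + 1)]
    return max(grid, key=attack_utility)

best_eps = search_epsilon()  # sits exactly at the detection threshold
```

Under this toy objective the optimum lands on the detection boundary, mirroring the abstract's claim that effective poisoning must remain unrecognisable to the FL server.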
Pages: 6312-6317 (6 pages)