Local perturbation-based black-box federated learning attack for time series classification

Cited by: 1
Authors
Chen, Shengbo [1 ]
Yuan, Jidong [2 ]
Wang, Zhihai [1 ]
Sun, Yongqi [1 ]
Affiliations
[1] Minist Educ, Key Lab Big Data & Artificial Intelligence Transpo, Beijing, Peoples R China
[2] Beijing Jiaotong Univ, Sch Comp Sci & Technol, Beijing 100044, Peoples R China
Keywords
Federated learning; Black-box backdoor attack; Local perturbation; Shapelet interval; Stealthy;
DOI
10.1016/j.future.2024.04.048
Chinese Library Classification: TP301 [Theory, Methods]
Subject Classification Code: 081202
Abstract
The widespread adoption of intelligent machines and sensors has generated vast amounts of time series data, leading to the increasing use of neural networks in time series classification. Federated learning has emerged as a promising machine learning paradigm that reduces the risk of user privacy leakage. However, federated learning is vulnerable to backdoor attacks, which pose significant security threats. Furthermore, existing white-box methods for attacking time series rely on unrealistic assumptions, resulting in poor adaptability and limited stealthiness. To overcome these limitations, this paper proposes a gradient-free black-box method called local perturbation-based backdoor Federated Learning Attack for Time Series classification (FLATS). The attack is formulated as a constrained optimization problem and solved with a differential evolution algorithm, without requiring any knowledge of the internal architecture of the target model. In addition, the proposed method uses a time series shapelet interval as the local perturbation range and adopts a soft target poisoning approach to minimize the difference between the attacker model and the benign model. Experimental results demonstrate that the proposed method can effectively attack federated learning time series classification models with potential security issues while generating imperceptible poisoned samples that can evade various defence methods.
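The abstract describes a gradient-free search: a perturbation confined to a shapelet interval is optimized with differential evolution using only the black-box model's predictions. The sketch below illustrates that general idea under stated assumptions; `query_model`, `shapelet_start`, `shapelet_len`, `target_class`, and `eps` are hypothetical names for illustration, not the paper's actual implementation.

```python
# Minimal sketch of a black-box, shapelet-interval-constrained perturbation search.
# All names and parameter choices here are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution


def query_model(x):
    """Placeholder for the black-box classifier: returns class probabilities for a
    univariate time series x. In the black-box setting only such outputs are available."""
    raise NotImplementedError


def poison_series(x, shapelet_start, shapelet_len, target_class, eps=0.5, max_iter=50):
    """Search for a perturbation confined to a shapelet interval that pushes the
    model's prediction towards target_class, using gradient-free differential evolution."""

    def objective(delta):
        x_adv = x.copy()
        # Perturb only the shapelet interval (local perturbation range).
        x_adv[shapelet_start:shapelet_start + shapelet_len] += delta
        probs = query_model(x_adv)
        # Minimize the negative target-class probability, i.e. maximize it.
        return -probs[target_class]

    # Bound the perturbation magnitude to keep the poisoned sample imperceptible.
    bounds = [(-eps, eps)] * shapelet_len
    result = differential_evolution(objective, bounds, maxiter=max_iter,
                                    polish=False, seed=0)

    x_adv = x.copy()
    x_adv[shapelet_start:shapelet_start + shapelet_len] += result.x
    return x_adv
```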
Pages: 488-500
Page count: 13
Related Papers
50 records in total
  • [1] Black-Box Adversarial Attack on Time Series Classification
    Ding, Daizong
    Zhang, Mi
    Feng, Fuli
    Huang, Yuanmin
    Jiang, Erling
    Yang, Min
    [J]. THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023, : 7358 - 7368
  • [2] TSadv: Black-box adversarial attack on time series with local perturbations
    Yang, Wenbo
    Yuan, Jidong
    Wang, Xiaokang
    Zhao, Peixiang
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 114
  • [3] Improved black-box attack based on query and perturbation distribution
    Zhao, Weiwei
    Zeng, Zhigang
    [J]. 2021 13TH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTATIONAL INTELLIGENCE (ICACI), 2021, : 117 - 125
  • [4] Black-box Adversarial Machine Learning Attack on Network Traffic Classification
    Usama, Muhammad
    Qayyum, Adnan
    Qadir, Junaid
    Al-Fuqaha, Ala
    [J]. 2019 15TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE (IWCMC), 2019, : 84 - 89
  • [5] Double Perturbation-Based Privacy-Preserving Federated Learning against Inference Attack
    Jiang, Yongqi
    Shi, Yanhang
    Chen, Siguang
    [J]. 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 5451 - 5456
  • [6] Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees
    Wang, Binghui
    Li, Youqi
    Zhou, Pan
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 13369 - 13377
  • [7] Sparse Black-Box Video Attack with Reinforcement Learning
    Wei, Xingxing
    Yan, Huanqian
    Li, Bo
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 (06) : 1459 - 1473
  • [8] Square-Based Black-Box Adversarial Attack on Time Series Classification Using Simulated Annealing and Post-Processing-Based Defense
    Liu, Sichen
    Luo, Yuan
    [J]. ELECTRONICS, 2024, 13 (03)