Square-Based Black-Box Adversarial Attack on Time Series Classification Using Simulated Annealing and Post-Processing-Based Defense

Cited by: 1
Authors
Liu, Sichen [1,2]
Luo, Yuan [1,2]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Blockchain Adv Res Ctr, Wuxi 214104, Peoples R China
Keywords
time series classification; adversarial attack; adversarial attack defense
DOI
10.3390/electronics13030650
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
While deep neural networks (DNNs) have been widely and successfully used for time series classification (TSC) over the past decade, their vulnerability to adversarial attacks has received little attention. Most existing attack methods focus on white-box setups, which are unrealistic because attackers typically have access only to the model's probability outputs. Existing defensive methods also have limitations: they rely primarily on adversarial retraining, which degrades classification accuracy and requires excessive training time. To address these gaps, we propose two new approaches in this paper: (1) a simulated annealing-based random search attack that finds adversarial examples without gradient estimation, searching only on the l∞-norm hypersphere of allowable perturbations; and (2) a post-processing defense technique that periodically reverses the trend of the corresponding loss values while preserving the overall trend, using only the classifier's confidence scores as input. Experiments applying these methods to InceptionNet models trained on the UCR benchmark datasets demonstrate the effectiveness of the attack, which achieves success rates of up to 100%. The defense method protects against up to 91.24% of attacks while preserving prediction quality. Overall, this work addresses important gaps in adversarial TSC by introducing a novel black-box attack and a lightweight defense technique.
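To make the attack loop described in the abstract concrete, the following minimal Python sketch illustrates a simulated annealing-based random search confined to the l∞ ball around a time series, using only the classifier's probability outputs. It is not the authors' implementation: the `predict_proba` callable, the segment sign-flip proposal, and all hyperparameter names (`eps`, `t_init`, `t_decay`, `n_iters`) are illustrative assumptions.

# Illustrative sketch only (not the paper's code): score-based black-box attack
# via simulated annealing, with the perturbation kept on the l-infinity sphere.
import numpy as np

def sa_linf_attack(predict_proba, x, true_label, eps=0.1,
                   n_iters=1000, t_init=1.0, t_decay=0.99, rng=None):
    """Search for an adversarial example x + delta with ||delta||_inf <= eps."""
    rng = np.random.default_rng(rng)
    # Start from a random vertex of the l-infinity ball (|delta_i| = eps).
    delta = eps * rng.choice([-1.0, 1.0], size=x.shape)
    loss = predict_proba(x + delta)[true_label]   # confidence of the true class
    temperature = t_init

    for _ in range(n_iters):
        # Propose a neighbour: flip the sign of a random contiguous segment,
        # so the perturbation stays on the l-infinity sphere of radius eps.
        start = rng.integers(0, len(x))
        length = rng.integers(1, max(2, len(x) // 10))
        candidate = delta.copy()
        candidate[start:start + length] *= -1.0

        cand_loss = predict_proba(x + candidate)[true_label]
        # Accept if the true-class confidence drops, or otherwise with a
        # probability set by the temperature (simulated-annealing rule).
        if cand_loss < loss or rng.random() < np.exp((loss - cand_loss) / temperature):
            delta, loss = candidate, cand_loss

        temperature *= t_decay
        if np.argmax(predict_proba(x + delta)) != true_label:
            break   # misclassification achieved, stop querying

    return x + delta

Under these assumptions, each iteration costs a couple of model queries and never requires gradients or gradient estimation, which is what makes this a score-based black-box attack in the sense described above.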
Pages: 13