A Context-aware Black-box Adversarial Attack for Deep Driving Maneuver Classification Models

Cited: 0
Authors
Sarker, Ankur [1 ]
Shen, Haiying [1 ]
Sen, Tanmoy [1 ]
Affiliations
[1] Univ Virginia, Dept Comp Sci, Charlottesville, VA 22904 USA
Keywords
DOI
10.1109/SECON52354.2021.9491584
CLC number
TP3 [Computing technology; computer technology]
Subject classification code
0812
Abstract
In a connected autonomous vehicle (CAV) scenario, each vehicle uses an onboard deep neural network (DNN) model to interpret the time-series driving signals (e.g., speed, brake status) it receives from nearby vehicles, and then takes the necessary actions to increase traffic safety and roadway efficiency. In this scenario, an attacker may launch an adversarial attack by adding unnoticeable perturbation to the actual driving signals to fool the DNN model inside a victim vehicle into outputting an incorrect class, causing traffic congestion and/or accidents. Such an attack must be generated in near real-time, and the adversarial maneuver must be consistent with the current traffic context. However, previously proposed adversarial attacks fail to meet these requirements. To handle these challenges, in this paper, we propose a Context-aware Black-box Adversarial Attack (CBAA) for time-series DNN models in CAV scenarios. By analyzing real driving datasets, we observe that specific driving signals at certain time points have a higher impact on the DNN output, and that these influential spatio-temporal factors differ across traffic contexts (combinations of traffic factors such as congestion, slope, and curvature). Thus, CBAA first generates a perturbation offline for each context, restricted to the influential spatio-temporal signals. When generating an attack online, CBAA uses the offline perturbation for the current context as the starting point and searches, via the zeroth-order gradient descent method, for the minimum perturbation that leads to misclassification. Limiting the spatio-temporal search scope to the current context greatly expedites finding the final perturbation. Our extensive experimental studies on two real driving datasets show that CBAA requires 43% fewer queries (to the DNN model to verify attack success) and 53% less time than existing adversarial attacks.
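The online step described in the abstract — zeroth-order (finite-difference) gradient descent on a perturbation restricted to influential spatio-temporal entries — can be sketched as follows. This is a minimal illustration of the general technique, not the paper's implementation: the quadratic toy loss, the mask, the step sizes, and all identifiers (`zoo_masked_step`, `loss_fn`, `target`) are hypothetical stand-ins, since the abstract does not disclose CBAA's internals.

```python
import numpy as np

def zoo_masked_step(loss_fn, x, delta, mask, sigma=1e-4, lr=0.05):
    """One zeroth-order descent step on the perturbation `delta`.

    The model is black-box: gradients are estimated with symmetric finite
    differences of `loss_fn`, and only the (time, signal) cells flagged as
    influential in `mask` are queried and updated.
    """
    grad = np.zeros_like(delta)
    for idx in np.argwhere(mask):            # only influential cells
        i = tuple(idx)
        e = np.zeros_like(delta)
        e[i] = sigma
        # two queries per coordinate; no access to model internals
        grad[i] = (loss_fn(x + delta + e) - loss_fn(x + delta - e)) / (2 * sigma)
    return delta - lr * grad * mask          # update only masked entries

# Toy stand-in for "confidence of the true maneuver class": minimized when
# the perturbed signals reach a (hypothetical) decision-boundary target.
target = np.array([[0.5, -0.2], [0.1, 0.3]])
loss_fn = lambda z: float(np.sum((z - target) ** 2))

x = np.zeros((2, 2))                         # 2 time steps x 2 driving signals
mask = np.array([[1, 0], [0, 1]])            # "influential" cells (offline analysis)
delta = np.zeros_like(x)
for _ in range(50):
    delta = zoo_masked_step(loss_fn, x, delta, mask)
```

Restricting the finite-difference estimate to the masked cells is what saves queries: each descent step costs two model queries per influential cell rather than two per cell of the whole time-series window.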
Pages: 9
Related Papers (50 in total)
  • [1] An Advanced Black-Box Adversarial Attack for Deep Driving Maneuver Classification Models
    Sarker, Ankur
    Shen, Haiying
    Sen, Tanmoy
    Uehara, Hua
    2020 IEEE 17TH INTERNATIONAL CONFERENCE ON MOBILE AD HOC AND SMART SYSTEMS (MASS 2020), 2020: 184-192
  • [2] Efficient Black-Box Adversarial Attacks for Deep Driving Maneuver Classification Models
    Sarker, Ankur
    Shen, Haiying
    Sen, Tanmoy
    Mendelson, Quincy
    2021 IEEE 18TH INTERNATIONAL CONFERENCE ON MOBILE AD HOC AND SMART SYSTEMS (MASS 2021), 2021: 536-544
  • [3] A Suspicion-Free Black-box Adversarial Attack for Deep Driving Maneuver Classification Models
    Sarker, Ankur
    Shen, Haiying
    Sen, Tanmoy
    2021 IEEE 41ST INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2021), 2021: 786-796
  • [4] Adversarial Eigen Attack on Black-Box Models
    Zhou, Linjun
    Cui, Peng
    Zhang, Xingxuan
    Jiang, Yinan
    Yang, Shiqiang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022: 15233-15241
  • [5] Targeted Black-Box Adversarial Attack Method for Image Classification Models
    Zheng, Su
    Chen, Jialin
    Wang, Lingli
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019
  • [6] Black-Box Adversarial Attack on Time Series Classification
    Ding, Daizong
    Zhang, Mi
    Feng, Fuli
    Huang, Yuanmin
    Jiang, Erling
    Yang, Min
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37, NO 6, 2023: 7358-7368
  • [7] Black-Box Adversarial Sample Attack for Query-Less Text Classification Models
    Luo, Senlin
    Cheng, Yao
    Wan, Yunwei
    Pan, Limin
    Li, Xinshuai
    Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology, 2024, 44(12): 1277-1286
  • [8] Towards Query-efficient Black-box Adversarial Attack on Text Classification Models
    Yadollahi, Mohammad Mehdi
    Lashkari, Arash Habibi
    Ghorbani, Ali A.
    2021 18TH INTERNATIONAL CONFERENCE ON PRIVACY, SECURITY AND TRUST (PST), 2021
  • [9] Detection Tolerant Black-Box Adversarial Attack Against Automatic Modulation Classification With Deep Learning
    Qi, Peihan
    Jiang, Tao
    Wang, Lizhan
    Yuan, Xu
    Li, Zan
    IEEE TRANSACTIONS ON RELIABILITY, 2022, 71(2): 674-686
  • [10] Black-box Adversarial Machine Learning Attack on Network Traffic Classification
    Usama, Muhammad
    Qayyum, Adnan
    Qadir, Junaid
    Al-Fuqaha, Ala
    2019 15TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE (IWCMC), 2019: 84-89