Efficient Black-Box Adversarial Attacks for Deep Driving Maneuver Classification Models

Cited: 0
Authors:
Sarker, Ankur [1 ]
Shen, Haiying [1 ]
Sen, Tanmoy [1 ]
Mendelson, Quincy [1 ]
Affiliation:
[1] Univ Virginia, Dept Comp Sci, Charlottesville, VA 22904 USA
DOI:
10.1109/MASS52906.2021.00072
Chinese Library Classification: TP18 [Theory of Artificial Intelligence]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Deep Neural Network (DNN) models are expected to be widely used in self-driving autonomous vehicles to understand the surrounding environment and enhance driving safety. In this paper, we propose a Fast Black-box Adversarial (FBA) attack for time-series DNN models in connected autonomous vehicle (CAV) scenarios. In this attack, an attacker sends false driving signals to a vehicle so that its DNN model misclassifies the current maneuver (e.g., maintaining speed is misclassified as stopping). Although various black-box adversarial attacks have been proposed previously, they mainly target image classification and cannot be directly adopted in CAV scenarios due to two challenges. First, the attack must be generated in near real time. Second, it should not be noticeable in the driving time-series signals. To handle these two challenges, FBA generates the adversarial signal in two steps: offline and online. First, based on our observation from real-data analysis that each driving maneuver exhibits maneuver-specific patterns in its time series regardless of driver or vehicle, FBA finds the influential input portion for each maneuver as the offline adversarial signal portion. Second, given a benign driving signal input, FBA replaces its influential input portion with the offline adversarial signal portion, smooths the signals, and uses the result as the initial solution for finding the optimal perturbation (one that leads to a successful attack while minimizing the perturbation magnitude) online using the zeroth-order gradient descent method. Because this initial solution is close to the optimum, the time to find the optimal perturbation is significantly reduced. Our experiments on real driving datasets show the effectiveness of FBA in handling the two challenges compared with existing black-box adversarial attacks.
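The online step described in the abstract relies on zeroth-order optimization, which estimates gradients purely from loss-function queries, since a black-box attacker cannot backpropagate through the victim DNN. The sketch below illustrates that general idea only; it is not the paper's implementation. It uses a two-point finite-difference gradient estimate on a hypothetical quadratic toy loss, and the warm start near the optimum (`x_init`) stands in for FBA's offline initial solution.

```python
import numpy as np

def zeroth_order_descent(loss_fn, x0, steps=200, mu=1e-2, lr=1e-1, seed=0):
    """Minimize a black-box scalar loss via zeroth-order gradient descent.

    loss_fn: black-box loss (queries only; no gradients available)
    x0:      initial guess (e.g., a warm start close to the optimum)
    mu:      smoothing radius for the finite-difference estimate
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        u = rng.standard_normal(x.shape)  # random probe direction
        # Two-point gradient estimate from two loss queries only.
        g = (loss_fn(x + mu * u) - loss_fn(x - mu * u)) / (2 * mu) * u
        x -= lr * g  # ordinary gradient-descent step on the estimate
    return x

# Toy demo: minimize ||x - target||^2 with function evaluations alone,
# starting from a warm initial solution near the optimum.
target = np.array([1.0, -2.0, 0.5])
loss = lambda x: float(np.sum((x - target) ** 2))
x_init = target + 0.3          # stands in for FBA's offline warm start
x_opt = zeroth_order_descent(loss, x_init)
```

A warm start matters here because each descent step costs two queries to the victim model; starting near the optimum, as FBA's offline portion provides, cuts the query budget needed for convergence.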
Pages: 536-544 (9 pages)