Play the Imitation Game: Model Extraction Attack against Autonomous Driving Localization

Cited by: 2
Authors
Zhang, Qifan [1 ]
Shen, Junjie [1 ]
Tan, Mingtian [2 ]
Zhou, Zhe [2 ]
Li, Zhou [1 ]
Chen, Qi Alfred [1 ]
Zhang, Haipeng [3 ]
Affiliations
[1] Univ Calif Irvine, Irvine, CA 92717 USA
[2] Fudan Univ, Shanghai, Peoples R China
[3] ShanghaiTech Univ, Shanghai, Peoples R China
Keywords
autonomous driving; localization; model extraction; KALMAN FILTER; IDENTIFICATION;
DOI
10.1145/3564625.3567977
CLC number
TP [Automation Technology, Computer Technology]
Discipline classification code
0812
Abstract
The security of Autonomous Driving (AD) systems has recently been gaining attention from researchers and the public. Given that AD companies have invested enormous resources in developing their AD models, e.g., localization models, these models, and especially their parameters, are important intellectual property and deserve strong protection. In this work, we examine whether the confidentiality of production-grade Multi-Sensor Fusion (MSF) models, in particular the Error-State Kalman Filter (ESKF), can be compromised by an outside adversary. We propose a new model extraction attack called TaskMaster that can infer the secret ESKF parameters under a black-box assumption. In essence, TaskMaster trains a substitute ESKF model to recover the parameters by observing the inputs and outputs of the targeted AD system. To recover the parameters precisely, we combine a set of techniques, such as gradient-based optimization, search-space reduction, and multi-stage optimization. Evaluation results on a real-world vehicle sensor dataset show that TaskMaster is practical. For example, with 25 seconds of AD sensor data for training, the substitute ESKF model reaches centimeter-level accuracy compared with the ground-truth model.
Pages: 56-70
Page count: 15
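The abstract outlines the attack at a high level: observe the victim filter's inputs and outputs, then fit a substitute filter by gradient-based optimization until its outputs match. The sketch below is a minimal illustration of that idea on a toy one-dimensional Kalman filter, not the paper's TaskMaster implementation; the filter model, the assumption that the process-noise variance is already known, and all names and values are illustrative and not taken from the paper.

```python
# Minimal illustrative sketch (not the paper's TaskMaster implementation):
# recover a Kalman filter's measurement-noise variance by fitting a
# substitute filter to observed input/output traces with gradient descent.
# The 1-D filter, the known process noise, and all values are assumptions.
import numpy as np

def kalman_trace(zs, q, r):
    """Run a 1-D constant-state Kalman filter with process-noise variance q
    and measurement-noise variance r over measurements zs; return estimates."""
    x, p = 0.0, 1.0
    out = []
    for z in zs:
        p = p + q                      # predict: covariance grows by q
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with measurement z
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

# "Victim" filter: the attacker observes only zs (inputs) and xs_obs (outputs).
rng = np.random.default_rng(0)
q_true, r_true = 0.01, 0.5
zs = 1.0 + rng.normal(0.0, np.sqrt(r_true), size=500)
xs_obs = kalman_trace(zs, q_true, r_true)

# Substitute model: q is assumed known here (a toy stand-in for reducing the
# search space); only log(r) is optimized, which keeps r positive.
def loss(log_r):
    return np.mean((kalman_trace(zs, q_true, np.exp(log_r)) - xs_obs) ** 2)

log_r, lr, eps = np.log(0.1), 25.0, 1e-5
for _ in range(400):
    grad = (loss(log_r + eps) - loss(log_r - eps)) / (2.0 * eps)  # finite diff
    log_r -= lr * grad

print(f"recovered r = {np.exp(log_r):.3f}  (ground truth r = {r_true})")
```

In the full attack described in the abstract, many ESKF parameters must be recovered jointly from production-grade MSF localization, which is why TaskMaster additionally relies on search-space reduction and multi-stage optimization rather than plain gradient descent on a single parameter.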