Universal Adversarial Attacks for Visual Odometry Systems

Cited by: 1
Authors
Xie, Xijin [1 ]
Liao, Longlong [1 ]
Yu, Yuanlong [1 ]
Guo, Di [2 ]
Liu, Huaping [3 ]
Affiliations
[1] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350108, Peoples R China
[2] Beijing Univ Posts & Telecommun, Sch Artificial Intelligence, Beijing, Peoples R China
[3] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DOI
10.1109/ICDL55364.2023.10364418
Abstract
Visual Odometry (VO) has gained significant attention as a critical technology with broad applications in autonomous navigation, augmented reality, and related domains. However, recent research indicates that VO systems are susceptible to adversarial attacks, leading to compromised accuracy and potential system failure. Traditional adversarial attack algorithms, which rely primarily on random perturbations or objective function minimization, are not well suited to VO algorithms. In this paper, we present a novel and general adversarial attack algorithm specifically designed to target the yaw and translation components of visual odometry while increasing the Euclidean distance between adjacent frames. Through a comprehensive analysis of the characteristics of VO algorithms, we propose an effective approach to disrupting VO system operation. Extensive experimental results demonstrate that the proposed attack algorithm significantly reduces the localization accuracy of VO algorithms while exhibiting robustness and generality. The findings of this research contribute to enhancing the security and stability of deep learning-based visual odometry algorithms, providing valuable insights and guidance for practical applications.
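The abstract does not spell out how the attack objective is implemented, so the following is only a minimal, hypothetical sketch of one way such an objective could be optimized against a differentiable, learning-based VO model in PyTorch. ToyVO, universal_attack, the unweighted loss terms, and all hyperparameters below are illustrative assumptions, not the authors' method.

# Hypothetical sketch (not the paper's code): optimize one image-sized
# "universal" perturbation shared by all frame pairs so that it corrupts
# the yaw and translation outputs of a learning-based VO model and also
# enlarges the Euclidean distance between positions predicted for
# adjacent frames.
import torch
import torch.nn as nn

class ToyVO(nn.Module):
    """Stand-in for a deep VO network: maps a frame pair (stacked on the
    channel axis) to a relative pose [tx, ty, tz, yaw]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 4),
        )

    def forward(self, frame_pair):
        return self.net(frame_pair)

def universal_attack(vo, frame_pairs, eps=8 / 255, steps=200, lr=1e-2):
    """Optimize a single perturbation applied to every frame of every pair."""
    delta = torch.zeros_like(frame_pairs[0:1, :3]).requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)
    clean_pose = vo(frame_pairs).detach()          # unattacked pose estimates
    for _ in range(steps):
        # Add the same perturbation to both frames of each pair.
        adv = (frame_pairs + torch.cat([delta, delta], dim=1)).clamp(0, 1)
        adv_pose = vo(adv)
        # Deviation of attacked translation and yaw from the clean estimates.
        trans_dev = (adv_pose[:, :3] - clean_pose[:, :3]).norm(dim=1).mean()
        yaw_dev = (adv_pose[:, 3] - clean_pose[:, 3]).abs().mean()
        # Euclidean distance between positions predicted for adjacent pairs.
        adj_dist = (adv_pose[1:, :3] - adv_pose[:-1, :3]).norm(dim=1).mean()
        loss = -(trans_dev + yaw_dev + adj_dist)   # maximize all three terms
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                      # keep the perturbation bounded
            delta.clamp_(-eps, eps)
    return delta.detach()

if __name__ == "__main__":
    vo = ToyVO().eval()
    pairs = torch.rand(8, 6, 64, 64)               # 8 adjacent frame pairs
    delta = universal_attack(vo, pairs)
    print("perturbation range:", delta.min().item(), delta.max().item())

In this sketch the perturbation is shared across all frame pairs (the "universal" property), and the loss simultaneously pushes the attacked yaw and translation away from the clean estimates and enlarges the inter-frame Euclidean distance; how the paper actually balances or constrains these terms is not stated in the abstract.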
Pages: 288-293
Page count: 6
Related papers
50 records in total
  • [31] Detection of adversarial attacks on machine learning systems
    Judah, Matthew
    Sierchio, Jen
    Planer, Michael
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V, 2023, 12538
  • [33] ADVERSARIAL ATTACKS AGAINST AUDIO SURVEILLANCE SYSTEMS
    Ntalampiras, Stavros
2022 30TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2022), 2022: 284 - 288
  • [34] Adversarial Attacks on License Plate Recognition Systems
    Gu, Zhaoquan
    Su, Yu
    Liu, Chenwei
    Lyu, Yinyu
    Jian, Yunxiang
    Li, Hao
    Cao, Zhen
    Wang, Le
    CMC-COMPUTERS MATERIALS & CONTINUA, 2020, 65 (02): 1437 - 1452
  • [35] Defending Distributed Systems Against Adversarial Attacks
    Su L.
    Performance Evaluation Review, 2020, 47 (03): 24 - 27
  • [36] Generative Adversarial Attacks on Fingerprint Recognition Systems
    Kwon, Hee Won
    Nam, Jea-Won
    Kim, Joongheon
    Lee, Youn Kyu
    35TH INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING (ICOIN 2021), 2021: 483 - 485
  • [37] Adversarial Attacks on Adaptive Cruise Control Systems
    Guo, Yanan
    Sato, Takami
    Cao, Yulong
    Chen, Qi Alfred
    Cheng, Yueqiang
    2023 CYBER-PHYSICAL SYSTEMS AND INTERNET-OF-THINGS WEEK, CPS-IOT WEEK WORKSHOPS, 2023: 49 - 54
  • [38] Adversarial Attacks Against Binary Similarity Systems
    Capozzi, Gianluca
    D'Elia, Daniele Cono
    Di Luna, Giuseppe Antonio
    Querzoni, Leonardo
    IEEE ACCESS, 2024, 12 : 161247 - 161269
  • [39] Adversarial Attacks Against IoT Identification Systems
    Kotak, Jaidip
    Elovici, Yuval
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (09) : 7868 - 7883
  • [40] Observability of linear systems under adversarial attacks
    Chong, Michelle S.
    Wakaiki, Masashi
    Hespanha, Joao P.
    2015 AMERICAN CONTROL CONFERENCE (ACC), 2015, : 2439 - 2444