Adversarial 3D Objects Against Monocular Depth Estimators

Cited: 0
Authors
Feher, Tamas Mark [1 ]
Szemenyei, Marton [1 ]
Affiliations
[1] Budapest Univ Technol & Econ, Dept Control Engn & Informat Technol, Budapest, Hungary
Source
PROCEEDINGS OF THE 2024 9TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING TECHNOLOGIES, ICMLT 2024 | 2024
Keywords
Monocular depth estimation; Adversarial attack; Transformer; Convolutional neural network; Implicit surfaces; ATTACKS;
DOI
10.1145/3674029.3674052
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The state-of-the-art solutions for reconstructing depth from 2D images (monocular depth estimation) are deep neural networks. Although neural networks in computer vision generally perform well, they are quite sensitive to specially crafted inputs called adversarial examples. Such adversarial examples can be created solely by manipulating the physical world with special lighting, painted textures, or 3D shapes. Unlike most of the existing literature on monocular depth estimation, this paper focuses on adversarial 3D shapes rather than painted textures or lighting. According to our results, adversarial 3D shapes can degrade the performance of monocular depth estimators, with an effect size comparable to the differences between successive state-of-the-art models. We published our implementation at: https://github.com/mntusr/threedattack1
Pages: 138-142
Page count: 5
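
As a rough illustration of the adversarial-example setting described in the abstract above, the Python sketch below runs a generic projected-gradient attack that pushes a differentiable monocular depth estimator away from its own clean prediction. This is only a minimal image-space sketch under assumed conventions, not the paper's method: the paper optimizes physical 3D shapes, which additionally requires a differentiable renderer in the optimization loop. The function name depth_attack, the hyperparameters, and the MiDaS model choice in the usage comment are illustrative assumptions, not taken from the paper.

# Minimal sketch (not the paper's pipeline): image-space PGD that degrades a
# monocular depth estimator relative to its clean prediction. The paper itself
# optimizes adversarial 3D shapes rendered into the scene instead of pixels.
import torch

def depth_attack(model, image, eps=8 / 255, alpha=2 / 255, steps=10):
    """Return an adversarial image maximizing L1 deviation from the clean depth map."""
    model.eval()
    with torch.no_grad():
        clean_depth = model(image)            # reference prediction on the clean input
    adv = image.clone().detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = (model(adv) - clean_depth).abs().mean()
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()               # gradient ascent on the depth error
            adv = image + (adv - image).clamp(-eps, eps)  # project back into the eps-ball
            adv = adv.clamp(0.0, 1.0)                     # keep pixel values valid
        adv = adv.detach()
    return adv

# Hypothetical usage with a MiDaS-style model (any differentiable depth network works):
# model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
# adv_image = depth_attack(model, preprocessed_image)  # preprocessed_image: (1, 3, H, W) in [0, 1]
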
Related Papers (50 in total)
  • [31] Liu, Gang; Xie, Xiaoxiao; Yu, Qingchen. Monocular 3D object detection with thermodynamic loss and decoupled instance depth. CONNECTION SCIENCE, 2024, 36 (01).
  • [32] Huang, Kuan-Chih; Wu, Tsung-Han; Su, Hung-Ting; Hsu, Winston H. MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022: 4002-4011.
  • [33] Duan, ZhiMin; Chen, YingWen; Yu, HuJie; Hu, BoWen; Chen, Chen. RGB-Fusion: Monocular 3D reconstruction with learned depth prediction. DISPLAYS, 2021, 70.
  • [34] Gao, Yuhan; Wang, Peng; Li, Xiaoyan; Sun, Mengyu; Di, Ruohai; Li, Liangliang; Hong, Wei. MonoDFNet: Monocular 3D Object Detection with Depth Fusion and Adaptive Optimization. SENSORS, 2025, 25 (03).
  • [35] Zhou, Yunsong; Liu, Quan; Zhu, Hongzi; Li, Yunzhe; Chang, Shan; Guo, Minyi. Exploiting Ground Depth Estimation for Mobile Monocular 3D Object Detection. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47 (04): 3079-3093.
  • [36] Peng, Liang; Wu, Xiaopei; Yang, Zheng; Liu, Haifeng; Cai, Deng. DID-M3D: Decoupling Instance Depth for Monocular 3D Object Detection. COMPUTER VISION - ECCV 2022, PT I, 2022, 13661: 71-88.
  • [37] Li, Jingwen; Song, Xuedong; Gao, Ruipeng; Tao, Dan. Monocular Depth Estimation for 3D Map Construction at Underground Parking Structures. ELECTRONICS, 2023, 12 (11).
  • [38] Choi, Wonhyeok; Shin, Mingyu; Im, Sunghoon. Depth-discriminative Metric Learning for Monocular 3D Object Detection. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023.
  • [39] Kao, Yueying; Li, Weiming; Wang, Qiang; Lin, Zhouchen; Kim, Wooshik; Hong, Sunghoon. Synthetic Depth Transfer for Monocular 3D Object Pose Estimation in the Wild. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34: 11221-11228.
  • [40] Ding, Mingyu; Huo, Yuqi; Yi, Hongwei; Wang, Zhe; Shi, Jianping; Lu, Zhiwu; Luo, Ping. Learning Depth-Guided Convolutions for Monocular 3D Object Detection. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020: 4306-4315.