EdgeConv with Attention Module for Monocular Depth Estimation

Cited by: 9
Authors
Lee, Minhyeok [1]
Hwang, Sangwon [1]
Park, Chaewon [1]
Lee, Sangyoun [1]
Affiliations
[1] Yonsei Univ, Seoul, South Korea
DOI: 10.1109/WACV51458.2022.00242
CLC number: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Monocular depth estimation is an especially important task in robotics and autonomous driving, where 3D structural information is essential. However, extreme lighting conditions and complex surface objects make it difficult to predict depth from a single image. Therefore, to generate accurate depth maps, the model must learn structural information about the scene. We propose a novel Patch-Wise EdgeConv Module (PEM) and EdgeConv Attention Module (EAM) to address the difficulty of monocular depth estimation. The proposed modules extract structural information by learning the relationships between spatially adjacent image patches using edge convolution. Our method is evaluated on two popular datasets, the NYU Depth V2 and the KITTI Eigen split, achieving state-of-the-art performance. We show through various comparative experiments that the proposed model predicts depth robustly in challenging scenes.
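The edge convolution the abstract refers to (EdgeConv, introduced for dynamic graph CNNs) builds, for each feature vector x_i, edge features [x_i, x_j − x_i] over its k nearest neighbors and aggregates them with a shared map and max-pooling. A minimal NumPy sketch of that core operation on toy "patch" features is below; the function name, random weights, and toy data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def edge_conv(x, W, k=3):
    """Minimal EdgeConv over a set of patch features (illustrative sketch).

    x: (N, F) patch feature vectors; W: (2F, F_out) shared linear weights.
    For each patch i, form edge features [x_i, x_j - x_i] over its k
    nearest neighbors j, apply the shared linear map + ReLU, and max-pool.
    """
    n = x.shape[0]
    # pairwise squared distances between patch features
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    out = []
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # k nearest neighbors, excluding self
        edges = np.concatenate(
            [np.repeat(x[i:i + 1], k, axis=0), x[nbrs] - x[i]], axis=1)
        out.append(np.maximum(edges @ W, 0).max(axis=0))  # ReLU, then max-pool
    return np.stack(out)

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))  # 8 toy patches with 4-dim features
W = rng.normal(size=(8, 6))      # shared weights: 2*4 inputs -> 6 channels
print(edge_conv(feats, W).shape)  # (8, 6): one aggregated feature per patch
```

Because each output aggregates differences to neighboring patches, the operation captures local relational (structural) information rather than treating each patch independently, which is the property the PEM and EAM exploit.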
Pages: 2364 - 2373
Page count: 10
Related papers (50 records)
  • [1] Pyramid frequency network with spatial attention residual refinement module for monocular depth estimation
    Lu, Zhengyang
    Chen, Ying
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (02)
  • [2] Monocular Depth Estimation with Adaptive Geometric Attention
    Naderi, Taher
    Sadovnik, Amir
    Hayward, Jason
    Qi, Hairong
    [J]. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 617 - 627
  • [3] Bidirectional Attention Network for Monocular Depth Estimation
    Aich, Shubhra
    Vianney, Jean Marie Uwabeza
    Islam, Md Amirul
    Kaur, Mannat
    Liu, Bingbing
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 11746 - 11752
  • [4] Depth-Relative Self Attention for Monocular Depth Estimation
    Shim, Kyuhong
    Kim, Jiyoung
    Lee, Gusang
    Shim, Byonghyo
    [J]. PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 1396 - 1404
  • [5] LAM-Depth: Laplace-Attention Module-Based Self-Supervised Monocular Depth Estimation
    Wei, Jiansheng
    Pan, Shuguo
    Gao, Wang
    Guo, Peng
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024,
  • [6] Self-supervised coarse-to-fine monocular depth estimation using a lightweight attention module
    Li, Yuanzhen
    Luo, Fei
    Xiao, Chunxia
    [J]. Computational Visual Media, 2022, 8 : 631 - 647
  • [7] Triaxial Squeeze Attention Module and Mutual-Exclusion Loss Based Unsupervised Monocular Depth Estimation
    Wei, Jiansheng
    Pan, Shuguo
    Gao, Wang
    Zhao, Tao
    [J]. NEURAL PROCESSING LETTERS, 2022, 54 (05) : 4375 - 4390
  • [8] Trap Attention: Monocular Depth Estimation with Manual Traps
    Ning, Chao
    Gan, Hongping
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 5033 - 5043
  • [9] Unsupervised Monocular Depth Estimation With Channel and Spatial Attention
    Wang, Zhuping
    Dai, Xinke
    Guo, Zhanyu
    Huang, Chao
    Zhang, Hao
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (06) : 7860 - 7870
  • [10] Self-supervised coarse-to-fine monocular depth estimation using a lightweight attention module
    Li, Yuanzhen
    Luo, Fei
    Xiao, Chunxia
    [J]. COMPUTATIONAL VISUAL MEDIA, 2022, 8 (04) : 631 - 647