EdgeConv with Attention Module for Monocular Depth Estimation

Cited by: 9
|
Authors
Lee, Minhyeok [1]
Hwang, Sangwon [1]
Park, Chaewon [1]
Lee, Sangyoun [1]
Affiliations
[1] Yonsei Univ, Seoul, South Korea
Keywords
DOI
10.1109/WACV51458.2022.00242
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Monocular depth estimation is an especially important task in robotics and autonomous driving, where 3D structural information is essential. However, extreme lighting conditions and complex object surfaces make it difficult to predict depth from a single image. To generate accurate depth maps, the model must therefore learn structural information about the scene. We propose a novel Patch-Wise EdgeConv Module (PEM) and EdgeConv Attention Module (EAM) to address these difficulties. The proposed modules extract structural information by learning the relationships between spatially adjacent image patches using edge convolution. Our method is evaluated on two popular datasets, NYU Depth V2 and the KITTI Eigen split, achieving state-of-the-art performance. Through various comparative experiments, we show that the proposed model predicts depth robustly in challenging scenes.
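The abstract describes edge convolution applied patch-wise: image patches are treated as graph nodes, and features are learned from relationships between neighboring patches. The paper's PEM/EAM internals are not given in this record, so the following is only a minimal NumPy sketch of the generic EdgeConv operation (as in DGCNN); the feature dimension, neighborhood size `k`, and the single linear layer standing in for the shared MLP are illustrative assumptions.

```python
import numpy as np

def edge_conv(feats, k=4, seed=0):
    """Generic EdgeConv over a set of patch features (illustrative sketch).

    feats: (N, C) array, one C-dimensional feature per image patch.
    For each patch i: find its k nearest neighbors j, form edge
    features [x_i, x_j - x_i], apply a shared linear map (stand-in
    for the usual MLP), and max-pool over the neighbors.
    """
    n, c = feats.shape
    w = np.random.default_rng(seed).standard_normal((2 * c, c)) * 0.1
    # Pairwise squared distances -> k nearest neighbors (excluding self).
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    knn = np.argsort(d2, axis=1)[:, :k]               # (N, k)
    out = np.empty_like(feats)
    for i in range(n):
        edges = np.concatenate(
            [np.repeat(feats[i : i + 1], k, axis=0),  # x_i, repeated
             feats[knn[i]] - feats[i]],               # x_j - x_i
            axis=1)                                   # (k, 2C)
        out[i] = np.maximum.reduce(edges @ w)         # max over neighbors
    return out

patches = np.random.default_rng(1).standard_normal((16, 8))  # 16 patches, 8-dim
print(edge_conv(patches).shape)  # (16, 8)
```

Because the edge feature combines the center feature `x_i` with the offsets `x_j - x_i`, the operation captures local structure relative to each patch rather than absolute feature values, which is what makes it useful for extracting scene structure.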
Pages: 2364-2373
Page count: 10
Related Papers
50 records in total
  • [31] Attention-Based Dense Decoding Network for Monocular Depth Estimation
    Wang, Jianrong
    Zhang, Ge
    Yu, Mei
    Xu, Tianyi
    Luo, Tao
    [J]. IEEE ACCESS, 2020, 8 : 85802 - 85812
  • [32] Monocular Depth Estimation Using Res-UNet with an Attention Model
    Jan, Abdullah
    Seo, Suyoung
    [J]. APPLIED SCIENCES-BASEL, 2023, 13 (10):
  • [33] Structured Attention Guided Convolutional Neural Fields for Monocular Depth Estimation
    Xu, Dan
    Wang, Wei
    Tang, Hao
    Liu, Hong
    Sebe, Nicu
    Ricci, Elisa
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 3917 - 3925
  • [34] Attention-based context aggregation network for monocular depth estimation
    Chen, Yuru
    Zhao, Haitao
    Hu, Zhengwei
    Peng, Jingchao
    [J]. INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2021, 12 (06) : 1583 - 1596
  • [35] Attention-based context aggregation network for monocular depth estimation
    Yuru Chen
    Haitao Zhao
    Zhengwei Hu
    Jingchao Peng
    [J]. International Journal of Machine Learning and Cybernetics, 2021, 12 : 1583 - 1596
  • [36] Transfer2Depth: Dual Attention Network With Transfer Learning for Monocular Depth Estimation
    Yeh, Chia-Hung
    Huang, Yao-Pao
    Lin, Chih-Yang
    Chang, Chuan-Yu
    [J]. IEEE ACCESS, 2020, 8 : 86081 - 86090
  • [37] Unsupervised Monocular Depth Estimation Based on Dual Attention Mechanism and Depth-Aware Loss
    Ye, Xinchen
    Zhang, Mingliang
    Xu, Rui
    Zhong, Wei
    Fan, Xin
    Liu, Zhu
    Zhang, Jiaao
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 169 - 174
  • [38] Look Deeper into Depth: Monocular Depth Estimation with Semantic Booster and Attention-Driven Loss
    Jiao, Jianbo
    Cao, Ying
    Song, Yibing
    Lau, Rynson
    [J]. COMPUTER VISION - ECCV 2018, PT 15, 2018, 11219 : 55 - 71
  • [39] Illumination Insensitive Monocular Depth Estimation Based on Scene Object Attention and Depth Map Fusion
    Wen, Jing
    Ma, Haojiang
    Yang, Jie
    Zhang, Songsong
    [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X, 2024, 14434 : 358 - 370
  • [40] Multi-scale Residual Pyramid Attention Network for Monocular Depth Estimation
    Liu, Jing
    Zhang, Xiaona
    Li, Zhaoxin
    Mao, Tianlu
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 5137 - 5144