Attention-based context aggregation network for monocular depth estimation

Cited by: 0
Authors
Yuru Chen
Haitao Zhao
Zhengwei Hu
Jingchao Peng
Affiliations
[1] School of Information Science and Engineering, East China University of Science and Technology
Keywords
Depth estimation; Attention model; Context aggregation; Convolutional neural networks; Deep learning
DOI
Not available
Abstract
Depth estimation is a traditional computer vision task that plays a crucial role in understanding 3D scene geometry. Recently, algorithms that combine multi-scale features extracted by dilated-convolution-based blocks (atrous spatial pyramid pooling, ASPP) have achieved significant improvements in depth estimation. However, the discrete, predefined dilation rates cannot capture the continuous context information that varies across diverse scenes, and they easily introduce gridding artifacts. This paper proposes a novel algorithm, the attention-based context aggregation network (ACAN), for depth estimation. A supervised self-attention model is designed to adaptively learn task-specific similarities between different pixels and thereby model continuous context information. Moreover, a soft ordinal inference is proposed to transform the predicted probabilities into continuous depth values, which reduces the discretization error (about a 1% decrease in RMSE). ACAN achieves state-of-the-art performance on public monocular depth-estimation benchmark datasets. The source code of ACAN is available at https://github.com/miraiaroha/ACAN.
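To make the two components described in the abstract concrete, the sketch below illustrates (a) a standard non-local self-attention module that aggregates context across all spatial positions of a CNN feature map, and (b) a soft ordinal inference step that converts per-pixel probabilities over discretized depth bins into continuous depth values via an expectation over log-spaced bin centers. This is a minimal PyTorch sketch under those assumptions: the names `ContextAggregation`, `soft_ordinal_depth`, `depth_min`, and `depth_max` are illustrative rather than taken from the ACAN code, and the attention-map supervision described in the paper is omitted; see the linked repository for the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class ContextAggregation(nn.Module):
    """Self-attention over all spatial positions of a feature map.

    Each pixel aggregates features from every other pixel, weighted by a
    learned pairwise similarity, so the effective receptive field is
    continuous rather than fixed by predefined dilation rates (as in ASPP).
    """
    def __init__(self, channels: int):
        super().__init__()
        reduced = max(channels // 8, 1)
        self.query = nn.Conv2d(channels, reduced, kernel_size=1)
        self.key = nn.Conv2d(channels, reduced, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.key(x).flatten(2)                    # (B, C', HW)
        v = self.value(x).flatten(2).transpose(1, 2)  # (B, HW, C)
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW) pixel similarities
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                # residual connection

def soft_ordinal_depth(logits: torch.Tensor,
                       depth_min: float = 0.25,
                       depth_max: float = 10.0) -> torch.Tensor:
    """Expected depth under the predicted distribution over K depth bins.

    Taking the expectation instead of the argmax yields continuous depth
    values, which is what removes the discretization error.
    """
    k = logits.shape[1]
    # Log-spaced bin centers, a common choice for depth discretization.
    centers = torch.exp(torch.linspace(math.log(depth_min),
                                       math.log(depth_max), k))
    probs = torch.softmax(logits, dim=1)              # (B, K, H, W)
    return (probs * centers.view(1, k, 1, 1)).sum(dim=1)  # (B, H, W)

# Usage on dummy data: aggregate context, then decode continuous depth.
feats = ContextAggregation(64)(torch.randn(2, 64, 30, 40))  # (2, 64, 30, 40)
depth = soft_ordinal_depth(torch.randn(2, 80, 30, 40))      # (2, 30, 40)
```

The expectation over bins is where the abstract's roughly 1% RMSE reduction comes from: hard argmax decoding quantizes every prediction to a bin center, while the soft weighted sum can land anywhere in the depth range.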
Pages: 1583-1596
Page count: 13
Related papers
50 records in total
  • [1] Attention-based context aggregation network for monocular depth estimation
    Chen, Yuru
    Zhao, Haitao
    Hu, Zhengwei
    Peng, Jingchao
    [J]. INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2021, 12(06): 1583-1596
  • [2] Attention-Based Dense Decoding Network for Monocular Depth Estimation
    Wang, Jianrong
    Zhang, Ge
    Yu, Mei
    Xu, Tianyi
    Luo, Tao
    [J]. IEEE ACCESS, 2020, 8: 85802-85812
  • [3] Attention-Based Grasp Detection With Monocular Depth Estimation
    Xuan Tan, Phan
    Hoang, Dinh-Cuong
    Nguyen, Anh-Nhat
    Nguyen, Van-Thiep
    Vu, Van-Duc
    Nguyen, Thu-Uyen
    Hoang, Ngoc-Anh
    Phan, Khanh-Toan
    Tran, Duc-Thanh
    Vu, Duy-Quang
    Ngo, Phuc-Quan
    Duong, Quang-Tri
    Ho, Ngoc-Trung
    Tran, Cong-Trinh
    Duong, Van-Hiep
    Mai, Anh-Truong
    [J]. IEEE ACCESS, 2024, 12: 65041-65057
  • [4] Channel-Wise Attention-Based Network for Self-Supervised Monocular Depth Estimation
    Yan, Jiaxing
    Zhao, Hong
    Bu, Penghui
    Jin, YuSheng
    [J]. 2021 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2021): 464-473
  • [5] Online supervised attention-based recurrent depth estimation from monocular video
    Maslov, Dmitrii
    Makarov, Ilya
    [J]. PEERJ COMPUTER SCIENCE, 2020, 6: 1-22
  • [6] DAttNet: monocular depth estimation network based on attention mechanisms
    Astudillo, Armando
    Barrera, Alejandro
    Guindel, Carlos
    Al-Kaff, Abdulla
    Garcia, Fernando
    [J]. NEURAL COMPUTING & APPLICATIONS, 2024, 36(07): 3347-3356
  • [7] Bidirectional Attention Network for Monocular Depth Estimation
    Aich, Shubhra
    Vianney, Jean Marie Uwabeza
    Islam, Md Amirul
    Kaur, Mannat
    Liu, Bingbing
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021): 11746-11752
  • [8] Attention-Based Self-Supervised Learning Monocular Depth Estimation With Edge Refinement
    Jiang, Chenweinan
    Liu, Haichun
    Li, Lanzhen
    Pan, Changchun
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP): 3218-3222