Triaxial Squeeze Attention Module and Mutual-Exclusion Loss Based Unsupervised Monocular Depth Estimation

Cited by: 0
Authors
Jiansheng Wei
Shuguo Pan
Wang Gao
Tao Zhao
Affiliations
[1] Southeast University,School of Instrument Science and Engineering
Source
Neural Processing Letters, 2022, Vol. 54
Keywords
Depth estimation; Unsupervised; Stereo images;
DOI
Not available
Abstract
Monocular depth estimation plays a crucial role in scene perception and 3D reconstruction. Supervised depth estimation requires vast amounts of ground-truth depth data for training, which severely restricts its generalization. In recent years, unsupervised learning methods that do not require LiDAR point clouds have attracted increasing attention. In this paper, we design an unsupervised monocular depth estimation method that uses stereo pairs for training. We present a triaxial squeeze attention module and introduce it into our unsupervised framework to enhance the detailed representation of the depth map. We also propose a novel training loss that enforces mutual exclusion in image reconstruction to improve the performance and robustness of unsupervised learning. Experimental results on KITTI show that our method not only outperforms existing unsupervised methods but also achieves results comparable with several supervised approaches trained on ground-truth data. These improvements better preserve the details of the depth map and maintain object shapes more smoothly.
Pages: 4375-4390
Number of pages: 15
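
The abstract describes the triaxial squeeze attention module only at a high level, so the PyTorch-style sketch below shows one plausible reading: the feature map is squeezed (average-pooled) along each of its three axes, channel, height, and width, and the three resulting attention maps jointly re-weight the input. The class name TriaxialSqueezeAttention, the reduction parameter, and the multiplicative fusion are illustrative assumptions, not the authors' published implementation.

import torch
import torch.nn as nn

class TriaxialSqueezeAttention(nn.Module):
    """Hypothetical attention block that squeezes a feature map along each of
    its three axes (channel, height, width) and re-weights the input with the
    fused attention maps. Illustrative only; not the paper's exact design."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel-axis branch: global average pooling over H and W, followed
        # by a small bottleneck MLP (squeeze-and-excitation style).
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Height-axis branch: average over the width axis, then a 1x1 conv.
        self.height_conv = nn.Conv2d(channels, channels, kernel_size=1)
        # Width-axis branch: average over the height axis, then a 1x1 conv.
        self.width_conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        # (b, c) -> per-channel attention, broadcast as (b, c, 1, 1)
        chan_att = torch.sigmoid(self.channel_fc(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        # (b, c, h, 1) -> per-row attention
        row_att = torch.sigmoid(self.height_conv(x.mean(dim=3, keepdim=True)))
        # (b, c, 1, w) -> per-column attention
        col_att = torch.sigmoid(self.width_conv(x.mean(dim=2, keepdim=True)))
        # Re-weight the input feature map with the three attention maps.
        return x * chan_att * row_att * col_att

# Example: apply the block to a decoder feature map of a depth network.
feat = torch.randn(2, 64, 32, 64)           # (batch, channels, height, width)
att = TriaxialSqueezeAttention(channels=64)
out = att(feat)                             # same shape as the input
print(out.shape)                            # torch.Size([2, 64, 32, 64])

Because each branch collapses one or two axes before producing its attention map, the block adds little computation while letting channel, row, and column statistics all modulate the decoder features, which is one way such a module could sharpen depth-map details as claimed in the abstract.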