Bidirectional Attention Network for Monocular Depth Estimation

Cited by: 28
Authors
Aich, Shubhra [1 ]
Vianney, Jean Marie Uwabeza [1 ]
Islam, Md Amirul [1 ]
Kaur, Mannat [1 ]
Liu, Bingbing [1 ]
Affiliations
[1] Huawei Technol, Noah's Ark Lab, Markham, ON L3R 5Y1, Canada
DOI
10.1109/ICRA48506.2021.9560885
CLC number
TP [Automation and Computer Technology];
Discipline code
0812 ;
Abstract
In this paper, we propose the Bidirectional Attention Network (BANet), an end-to-end framework for monocular depth estimation (MDE) that addresses the difficulty convolutional neural networks have in effectively integrating local and global information. The mechanism draws on a strong conceptual foundation from neural machine translation and provides lightweight, adaptive control of computation similar to the dynamic nature of recurrent neural networks. We introduce bidirectional attention modules that utilize the feed-forward feature maps and incorporate global context to filter out ambiguity. Extensive experiments demonstrate the effectiveness of this bidirectional attention model over feed-forward baselines and other state-of-the-art monocular depth estimation methods on two challenging datasets, KITTI and DIODE. Our approach matches or outperforms state-of-the-art MDE methods with lower memory and computational cost.
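The abstract describes attention modules that reweight feed-forward feature maps using a globally pooled context signal. BANet's actual bidirectional formulation is not reproduced here; as a loose illustrative sketch (the function names, the pooled-context scoring, and the stage-wise fusion scheme are all assumptions for illustration, not taken from the paper), global-context attention over multi-stage backbone features might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(features):
    """Fuse per-stage feature maps with attention weights from global context.

    features: list of S arrays, each (C, H, W) -- hypothetical backbone stages
    returns:  (C, H, W) fused feature map
    """
    stack = np.stack(features)          # (S, C, H, W)
    context = stack.mean(axis=(2, 3))   # global average pool -> (S, C)
    scores = context.sum(axis=1)        # crude per-stage score -> (S,)
    w = softmax(scores)                 # attention weights over stages
    return np.tensordot(w, stack, axes=1)  # weighted sum -> (C, H, W)
```

In this sketch, stages carrying stronger global activation dominate the fused output; the paper's modules additionally run such filtering in both forward and backward directions across the network, which is omitted here.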
Pages: 11746-11752 (7 pages)
Related papers (50 in total)
  • [21] Attention based multilayer feature fusion convolutional neural network for unsupervised monocular depth estimation
    Lei, Zeyu
    Wang, Yan
    Li, Zijian
    Yang, Junyao
    [J]. NEUROCOMPUTING, 2021, 423 : 343 - 352
  • [22] Monocular depth estimation with multi-view attention autoencoder
    Jung, Geunho
    Yoon, Sang Min
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (23) : 33759 - 33770
  • [23] Lightweight monocular absolute depth estimation based on attention mechanism
    Jin, Jiayu
    Tao, Bo
    Qian, Xinbo
    Hu, Jiaxin
    Li, Gongfa
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (02)
  • [24] Attention-Based Grasp Detection With Monocular Depth Estimation
    Xuan Tan, Phan
    Hoang, Dinh-Cuong
    Nguyen, Anh-Nhat
    Nguyen, Van-Thiep
    Vu, Van-Duc
    Nguyen, Thu-Uyen
    Hoang, Ngoc-Anh
    Phan, Khanh-Toan
    Tran, Duc-Thanh
    Vu, Duy-Quang
    Ngo, Phuc-Quan
    Duong, Quang-Tri
    Ho, Ngoc-Trung
    Tran, Cong-Trinh
    Duong, Van-Hiep
    Mai, Anh-Truong
    [J]. IEEE ACCESS, 2024, 12 : 65041 - 65057
  • [25] Radar Fusion Monocular Depth Estimation Based on Dual Attention
    Long, JianYu
    Huang, JinGui
    Wang, ShengChun
    [J]. ARTIFICIAL INTELLIGENCE AND SECURITY, ICAIS 2022, PT I, 2022, 13338 : 166 - 179
  • [26] DEEP MONOCULAR VIDEO DEPTH ESTIMATION USING TEMPORAL ATTENTION
    Ren, Haoyu
    El-khamy, Mostafa
    Lee, Jungwon
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 1988 - 1992
  • [27] Monocular Depth Estimation with Optical Flow Attention for Autonomous Drones
    Shimhada, Tomoyasu
    Nishikawa, Hiroki
    Kong, Xiangbo
    Tomiyama, Hiroyuki
    [J]. 2022 19TH INTERNATIONAL SOC DESIGN CONFERENCE (ISOCC), 2022, : 197 - 198
  • [29] MAMo: Leveraging Memory and Attention for Monocular Video Depth Estimation
    Yasarla, Rajeev
    Cai, Hong
    Jeong, Jisoo
    Shi, Yunxiao
    Garrepalli, Risheek
    Porikli, Fatih
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 8720 - 8730
  • [30] Boosting Monocular Depth Estimation with Channel Attention and Mutual Learning
    Takagi, Kazunari
    Ito, Seiya
    Kaneko, Naoshi
    Sumi, Kazuhiko
    [J]. 2019 JOINT 8TH INTERNATIONAL CONFERENCE ON INFORMATICS, ELECTRONICS & VISION (ICIEV) AND 2019 3RD INTERNATIONAL CONFERENCE ON IMAGING, VISION & PATTERN RECOGNITION (ICIVPR) WITH INTERNATIONAL CONFERENCE ON ACTIVITY AND BEHAVIOR COMPUTING (ABC), 2019, : 228 - 233