Radar Fusion Monocular Depth Estimation Based on Dual Attention

Citations: 1
Authors
Long, JianYu [1 ]
Huang, JinGui [1 ]
Wang, ShengChun [1 ]
Affiliations
[1] Hunan Normal Univ, Changsha 410006, Peoples R China
Keywords
Monocular depth estimation; Radar; Attention; nuScenes;
DOI
10.1007/978-3-031-06794-5_14
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this article, we explore the integration of multimodal data into monocular depth estimation, fusing RGB images with sparse radar data. Existing fusion methods do not account for the correlation between the two modalities across channels and space, and therefore lack a representation of their global channel-wise and spatial relationships. We propose a feature fusion module (DAF) based on a dual attention mechanism. By modeling the dynamic, non-linear relationships between the two kinds of data in both the channel and spatial dimensions, DAF improves the model's global information representation, adaptively recalibrates the response to each feature, and maximizes the use of the radar data. At the same time, DAF suppresses noise in the radar data by weighting features, avoiding the loss of fine detail caused by filtering operations and alleviating the problem of excessive radar noise. Finally, because of complex weather conditions and limitations of the model itself, it is difficult for the model to obtain effective feature representations in adverse weather. We therefore introduce a batch loss function that lets the model focus on feature extraction in complex environments, yielding a more accurate representation of feature information, reducing model error, and speeding up convergence. Experiments were conducted on the recently released nuScenes dataset, which provides recordings from the full sensor suite of an autonomous vehicle. The experiments show that our method outperforms other fusion methods.
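The channel-then-spatial attention fusion described in the abstract can be illustrated with a minimal pure-Python sketch. This is not the paper's implementation: the parameter-free pool-sigmoid-rescale gates, the nested-list "tensors", and all function names here are illustrative simplifications of the general dual-attention idea (in the actual DAF module the gates would be produced by learned layers).

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def channel_attention(feat):
    """Rescale each channel by a gate derived from its global average
    (a squeeze-and-excitation-style channel attention, simplified to
    pool -> sigmoid -> rescale, with no learned weights)."""
    out = []
    for ch in feat:  # feat: C channels, each an H x W grid (list of lists)
        avg = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        g = sigmoid(avg)
        out.append([[v * g for v in row] for row in ch])
    return out


def spatial_attention(feat):
    """Rescale every spatial position by a gate derived from the
    channel-wise mean response at that position."""
    c_count, h, w = len(feat), len(feat[0]), len(feat[0][0])
    gate = [[sigmoid(sum(feat[c][i][j] for c in range(c_count)) / c_count)
             for j in range(w)] for i in range(h)]
    return [[[feat[c][i][j] * gate[i][j] for j in range(w)]
             for i in range(h)] for c in range(c_count)]


def dual_attention_fuse(rgb_feat, radar_feat):
    """Concatenate RGB and radar feature channels, then apply channel
    attention followed by spatial attention; low-response (noisy) radar
    features get small gates and are attenuated rather than filtered out."""
    fused = rgb_feat + radar_feat  # channel-wise concatenation
    return spatial_attention(channel_attention(fused))
```

Attenuating noisy features by gating, rather than discarding them with a hard filter, is what lets the module keep the "secondary details" the abstract mentions: every radar response still contributes, just with a weight proportional to how informative it appears.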
Pages: 166 / 179
Page count: 14
Related Papers
50 records
  • [1] Illumination Insensitive Monocular Depth Estimation Based on Scene Object Attention and Depth Map Fusion
    Wen, Jing
    Ma, Haojiang
    Yang, Jie
    Zhang, Songsong
    [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X, 2024, 14434 : 358 - 370
  • [2] UNSUPERVISED MONOCULAR DEPTH ESTIMATION BASED ON DUAL ATTENTION MECHANISM AND DEPTH-AWARE LOSS
    Ye, Xinchen
    Zhang, Mingliang
    Xu, Rui
    Zhong, Wei
    Fan, Xin
    Liu, Zhu
    Zhang, Jiaao
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 169 - 174
  • [4] Attention based multilayer feature fusion convolutional neural network for unsupervised monocular depth estimation
    Lei, Zeyu
    Wang, Yan
    Li, Zijian
    Yang, Junyao
    [J]. NEUROCOMPUTING, 2021, 423 : 343 - 352
  • [5] Attention-Based Grasp Detection With Monocular Depth Estimation
    Xuan Tan, Phan
    Hoang, Dinh-Cuong
    Nguyen, Anh-Nhat
    Nguyen, Van-Thiep
    Vu, Van-Duc
    Nguyen, Thu-Uyen
    Hoang, Ngoc-Anh
    Phan, Khanh-Toan
    Tran, Duc-Thanh
    Vu, Duy-Quang
    Ngo, Phuc-Quan
    Duong, Quang-Tri
    Ho, Ngoc-Trung
    Tran, Cong-Trinh
    Duong, Van-Hiep
    Mai, Anh-Truong
    [J]. IEEE ACCESS, 2024, 12 : 65041 - 65057
  • [6] Lightweight monocular absolute depth estimation based on attention mechanism
    Jin, Jiayu
    Tao, Bo
    Qian, Xinbo
    Hu, Jiaxin
    Li, Gongfa
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (02)
  • [7] DAttNet: monocular depth estimation network based on attention mechanisms
    Astudillo, Armando
    Barrera, Alejandro
    Guindel, Carlos
    Al-Kaff, Abdulla
    Garcia, Fernando
    [J]. NEURAL COMPUTING & APPLICATIONS, 2024, 36 (07): : 3347 - 3356
  • [9] Transfer2Depth: Dual Attention Network With Transfer Learning for Monocular Depth Estimation
    Yeh, Chia-Hung
    Huang, Yao-Pao
    Lin, Chih-Yang
    Chang, Chuan-Yu
    [J]. IEEE ACCESS, 2020, 8 : 86081 - 86090
  • [10] Unsupervised Monocular Depth Estimation Based on Dense Feature Fusion
    Chen Ying
    Wang Yiliang
    [J]. JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2021, 43 (10) : 2976 - 2984