Structure-aware dehazing of sewer inspection images based on monocular depth cues

Times Cited: 0
Authors
Xia, Zixia [1 ]
Guo, Shuai [2 ]
Sun, Di [3 ]
Lv, Yaozhi [4 ]
Li, Honglie [5 ]
Pan, Gang [1 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, 135 Yaguan Rd, Tianjin 300350, Peoples R China
[2] Hefei Univ Technol, Dept Municipal Engn, Hefei, Anhui, Peoples R China
[3] Tianjin Univ Sci & Technol, Coll Artificial Intelligence, Tianjin, Peoples R China
[4] Tianjin Municipal Engn Design & Res Inst, Key Lab Infrastruct Durabil, Tianjin, Peoples R China
[5] PipeChina, West East Gas Pipeline Co, Shanghai, Peoples R China
Keywords
CRACK DETECTION; DEFECTS;
DOI: 10.1111/mice.12900
Chinese Library Classification: TP39 [Computer Applications]
Discipline Codes: 081203; 0835
Abstract
In sewer pipes, haze caused by the humid environment severely degrades the quality of closed-circuit television (CCTV) images, which in turn impairs subsequent pipe defect detection. Meanwhile, the complexity of sewer images, such as steep depth changes and extensive textureless regions, poses great challenges to the performance and applicability of general dehazing algorithms. This study therefore first estimates sewer depth maps with the help of the water-pipe-wall borderlines in order to produce a paired dehazing dataset. A structure-aware nonlocal network (SANL-Net) is then proposed, with the detected borderlines and the dehazing result serving as two supervisory signals. SANL-Net outperforms other state-of-the-art approaches, achieving a mean squared error (MSE) of 147, a peak signal-to-noise ratio (PSNR) of 27.28, a structural similarity index measure (SSIM) of 0.8963, and 15.47M parameters. Its strong performance on real hazy images also indicates the accuracy of the depth estimation. Experimental results show that SANL-Net significantly improves downstream defect detection tasks, for example an increase of 23.16% in mean intersection over union (mIoU) for semantic segmentation.
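The abstract states that estimated depth maps are used to produce a paired (hazy/clean) dehazing dataset. A common way to synthesize such pairs, given as a hedged sketch here rather than the paper's exact procedure, is the standard atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)); the function name, β, and airlight values below are illustrative assumptions:

```python
import numpy as np

def synthesize_haze(clean, depth, beta=1.0, airlight=0.8):
    """Synthesize a hazy image from a clean image and a depth map using the
    atmospheric scattering model:
        I(x) = J(x) * t(x) + A * (1 - t(x)),   t(x) = exp(-beta * d(x))
    clean:  H x W x 3 float image in [0, 1]
    depth:  H x W per-pixel depth map (larger depth -> denser haze)
    """
    t = np.exp(-beta * depth)[..., None]          # transmission, H x W x 1
    return clean * t + airlight * (1.0 - t)       # blend scene with airlight
```

Zero depth leaves the image unchanged (t = 1), while very large depth drives every pixel toward the airlight value, which matches the intuition that haze accumulates with distance along the pipe axis.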
Pages: 762–778 (17 pages)