An automatic building façade deterioration detection system using infrared-visible image fusion and deep learning

Cited by: 3
Authors
Wang, Pujin [1 ]
Xiao, Jianzhuang [1 ]
Qiang, Xingxing [2 ]
Xiao, Rongwei [3 ]
Liu, Yi [3 ]
Sun, Chang [4 ]
Hu, Jianhui [5 ]
Liu, Shijie [6 ,7 ]
Affiliations
[1] Tongji Univ, Coll Civil Engn, Shanghai 200092, Peoples R China
[2] Hangzhou Kuaishouge Intelligent Technol Co Ltd, Hangzhou 310015, Peoples R China
[3] Shanghai Shuangying Aviat Technol Co Ltd, Shanghai 201108, Peoples R China
[4] Univ Shanghai Sci & Technol, Sch Environm & Architecture, Shanghai 200093, Peoples R China
[5] Shanghai Jiao Tong Univ, Space Struct Res Ctr, State Key Lab Ocean Engn, Shanghai 200240, Peoples R China
[6] Chinese Acad Sci, Shanghai Inst Tech Phys, Shanghai 200083, Peoples R China
[7] UCAS, Hangzhou Inst Adv Study, Hangzhou 310024, Peoples R China
Source
Keywords
Building façade; Deterioration detection; Infrared-visible image fusion; GAN; Instance segmentation; Deep learning; OBJECT DETECTION; INFORMATION;
DOI
10.1016/j.jobe.2024.110122
Chinese Library Classification
TU [Building science];
Discipline classification code
0813;
Abstract
Diverse building façade deteriorations, occurring both on the surface and within the materials, pose substantial challenges to structural durability and occupant safety. Nevertheless, prevailing evaluation and detection efforts have relied predominantly on surface-level visual data, overlooking the complementary information offered by infrared imagery, which reveals deteriorations at depth, such as moisture and plaster detachment. This study therefore proposes a novel hybrid method for automatic building façade deterioration detection that integrates cross-referenced infrared and visible images using deep learning. A dataset of 1228 infrared-visible image pairs covering four key deteriorations (crack, spalling, moisture-related damage, and plaster detachment) is collected for training and validation. An infrared-visible image fusion (IVIF) module based on a generative adversarial network (GAN) is then trained to preserve the deterioration characteristics evident in either image modality. Four instance segmentation models are subsequently trained and compared. The results validate the proposed IVIF method through both qualitative and quantitative assessments, and the high mean average precision (mAP) of 86.5% achieved by the instance segmentation module confirms thorough utilization of the complementary information, supporting the decision-making processes crucial for façade maintenance throughout the service life of buildings.
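The abstract outlines a two-stage pipeline: GAN-based infrared-visible image fusion (IVIF), followed by instance segmentation of the fused images. As a rough illustration of the first stage only, below is a minimal FusionGAN-style training step in PyTorch. The paper's actual network architecture, loss terms, and weights are not given in this record, so every layer size, loss coefficient, and name here is an assumed placeholder, not the authors' implementation.

```python
# Minimal sketch of a FusionGAN-style IVIF training step (assumed design,
# not the paper's actual model): the generator fuses an IR-visible pair,
# the discriminator pushes the fused image toward the visible modality,
# and content losses retain IR intensity and visible texture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionGenerator(nn.Module):
    """Maps a concatenated (infrared, visible) pair to a single fused image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 5, padding=2), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        return self.net(torch.cat([ir, vis], dim=1))

class FusionDiscriminator(nn.Module):
    """Scores how closely an image resembles the visible modality."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def gradient(img):
    """Finite-difference gradient magnitude used as a simple texture proxy."""
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    return F.pad(dx.abs(), (0, 1, 0, 0)) + F.pad(dy.abs(), (0, 0, 0, 1))

G, D = FusionGenerator(), FusionDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

ir = torch.rand(4, 1, 128, 128)   # stand-in for a registered infrared batch
vis = torch.rand(4, 1, 128, 128)  # stand-in for the co-registered visible batch

# Discriminator step: distinguish real visible images from fused outputs.
fused = G(ir, vis).detach()
loss_d = bce(D(vis), torch.ones(4, 1)) + bce(D(fused), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while preserving infrared intensity
# (thermal anomalies such as moisture) and visible gradients (cracks, edges).
# The 5.0 and 10.0 loss weights are illustrative, not from the paper.
fused = G(ir, vis)
loss_g = (bce(D(fused), torch.ones(4, 1))
          + 5.0 * F.l1_loss(fused, ir)
          + 10.0 * F.l1_loss(gradient(fused), gradient(vis)))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In the full method, the fused outputs would then be annotated and passed to an instance segmentation model to obtain per-deterioration masks and the reported mAP; that second stage is omitted from this sketch.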
Pages: 19
Related papers
50 records in total
  • [31] An Improved Infrared and Visible Image Fusion Using an Adaptive Contrast Enhancement Method and Deep Learning Network with Transfer Learning
    Bhutto, Jameel Ahmed
    Tian, Lianfang
    Du, Qiliang
    Sun, Zhengzheng
    Yu, Lubin
    Soomro, Toufique Ahmed
    REMOTE SENSING, 2022, 14 (04)
  • [32] Infrared and Visible Image Fusion Using a Deep Unsupervised Framework With Perceptual Loss
    Xu, Dongdong
    Wang, Yongcheng
    Zhang, Xin
    Zhang, Ning
    Yu, Sibo
    IEEE ACCESS, 2020, 8 : 206445 - 206458
  • [33] A deep learning based relative clarity classification method for infrared and visible image fusion
    Abera, Deboch Eyob
    Qi, Jin
    Cheng, Jian
    INFRARED PHYSICS & TECHNOLOGY, 2024, 140
  • [34] Infrared and Visible Image Fusion: Statistical Analysis, Deep Learning Approaches and Future Prospects
    Wu Yifei
    Yang Rui
    Lu Qishen
    Tang Yuting
    Zhang Chengmin
    Liu Shuaihui
    LASER & OPTOELECTRONICS PROGRESS, 2024, 61 (14)
  • [35] AUTOMATIC BUILDING DETECTION WITH FEATURE SPACE FUSION USING ENSEMBLE LEARNING
    Senaras, Caglar
    Yuksel, Baris
    Ozay, Mete
    Yarman-Vural, Fatos
    2012 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS), 2012, : 6713 - 6716
  • [36] Infrared and Visible Image Fusion with Deep Neural Network in Enhanced Flight Vision System
    Gao, Xuyang
    Shi, Yibing
    Zhu, Qi
    Fu, Qiang
    Wu, Yuezhou
    REMOTE SENSING, 2022, 14 (12)
  • [37] INFRARED AND VISIBLE IMAGE FUSION USING SALIENCY DETECTION BASED ON SHEARLET TRANSFORM
    Fei, Chun
    Zhang, Ping
    Tian, Ming
    Wang, Xiaowei
    Wu, Jiang
    2016 13TH INTERNATIONAL COMPUTER CONFERENCE ON WAVELET ACTIVE MEDIA TECHNOLOGY AND INFORMATION PROCESSING (ICCWAMTIP), 2016, : 273 - 276
  • [38] Infrared and Visible Image Fusion Using Modified PCNN and Visual Saliency Detection
    Ding, Zhaisheng
    Zhou, Dongming
    Nie, Rencan
    Hou, Ruichao
    Liu, Yanyu
    2018 INTERNATIONAL CONFERENCE ON IMAGE AND VIDEO PROCESSING, AND ARTIFICIAL INTELLIGENCE, 2018, 10836
  • [39] Automatic image captioning system using a deep learning approach
    Deepak, Gerard
    Gali, Sowmya
    Sonker, Abhilash
    Jos, Bobin Cherian
    Sagar, K. V. Daya
    Singh, Charanjeet
    SOFT COMPUTING, 2023
  • [40] Infrared-Visible Image Fusion Using Dual-Branch Auto-Encoder With Invertible High-Frequency Encoding
    Liu, Honglin
    Mao, Qirong
    Dong, Ming
    Zhan, Yongzhao
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (03) : 2675 - 2688