Automated Bridge Coating Defect Recognition Using U-net Fully Convolutional Neural Networks

Cited by: 0
Authors
Huang I.-F. [1]
Chen P.-H. [2]
Chen S.-K. [1]
Affiliations
[1] Department of Civil Engineering, National Taiwan University, Taipei
[2] Department of Building, Civil and Environmental Engineering, The Gina Cody School of Engineering and Computer Science, Concordia University, Montreal, QC
Keywords
Deep learning; Pixel-level; Rust; U-net
DOI
10.6652/JoCICHE.202112_33(8).0002
Abstract
Because the weather in Taiwan is mostly warm and humid, steel bridges rust easily. Since rusting is one of the most significant factors in steel bridge maintenance, and steel bridges play a crucial role in most countries, it is important to develop effective rust detection methods that enhance steel bridge health and safety and lower the life-cycle cost of steel bridges. Image processing techniques (IPTs) have been developed in prior research to detect rust defects on steel bridges quickly and effectively. The keys to rust defect recognition are discriminating rust spots from backgrounds that may contain rust-like noise and handling non-uniform illumination. To detect rust spots more effectively and efficiently, this paper explores a new rust recognition method that integrates a deep-learning-based fully convolutional neural network, namely U-net, with a newly developed image semantic segmentation model to provide pixel-wise steel bridge rust defect recognition. © 2021, Chinese Institute of Civil and Hydraulic Engineering. All rights reserved.
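To illustrate the kind of architecture the abstract refers to, the sketch below is a minimal U-net-style encoder-decoder with skip connections for pixel-wise two-class (rust vs. background) segmentation, written in PyTorch. It is not the authors' implementation: the class name MiniUNet, the channel widths, the two-level depth, and the 256×256 RGB input are illustrative assumptions only.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic building block of U-net
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Illustrative two-level U-net producing per-pixel rust/background logits."""
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_channels, 64)
        self.enc2 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.dec2 = conv_block(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec1 = conv_block(128, 64)
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                                      # full resolution
        e2 = self.enc2(self.pool(e1))                          # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                     # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.head(d1)                                   # per-pixel class logits

# Example: one 3-channel 256x256 bridge image -> 2-class logits of the same size
logits = MiniUNet()(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 2, 256, 256])
```

The skip connections concatenate encoder features with the upsampled decoder features, which is what lets the network keep fine spatial detail for pixel-level defect maps; a segmentation loss such as per-pixel cross-entropy would be applied to the output logits during training.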
Pages: 605-617
Number of pages: 12